Version v1.44

Nick Craig-Wood
2018-10-15 11:03:08 +01:00
parent 7f0b204292
commit f96ce5674b
72 changed files with 32422 additions and 16975 deletions

File diff suppressed because it is too large

MANUAL.md: 3786 changed lines (file diff suppressed because it is too large)

MANUAL.txt: 3673 changed lines (file diff suppressed because it is too large)


@@ -165,7 +165,7 @@ def main():
 %s
 * Bug Fixes
 %s
-%s""" % (version, datetime.date.today(), "\n".join(new_features_lines), "\n".join(bugfix_lines), "\n".join(backend_lines)))
+%s""" % (next_version, datetime.date.today(), "\n".join(new_features_lines), "\n".join(bugfix_lines), "\n".join(backend_lines)))
 sys.stdout.write(old_tail)
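The hunk above swaps `version` for `next_version` so the generated heading names the release being cut rather than the one already published. A minimal sketch of that templating step (the function name and list arguments are illustrative, not the actual make_changelog.py code):

```python
import datetime

def render_changelog_head(next_version, new_features_lines, bugfix_lines):
    # Render the top of the new changelog entry. The heading must use
    # next_version (the release being prepared), which is what the fix
    # above corrects.
    return """## %s - %s

* New Features
%s
* Bug Fixes
%s
""" % (next_version, datetime.date.today(),
       "\n".join(new_features_lines), "\n".join(bugfix_lines))
```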


@@ -381,7 +381,7 @@ This value should be set no larger than 4.657GiB (== 5GB).
 - Config: upload_cutoff
 - Env Var: RCLONE_B2_UPLOAD_CUTOFF
 - Type: SizeSuffix
-- Default: 190.735M
+- Default: 200M

 #### --b2-chunk-size
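The old default of 190.735M is just 200,000,000 bytes expressed in rclone's binary SizeSuffix units (1M = 1024 * 1024 bytes); the new default is a round 200M, i.e. 209,715,200 bytes. A tiny sketch of that style of parsing (a hypothetical helper, not rclone's Go implementation):

```python
def parse_size_suffix(s):
    # Parse a SizeSuffix-style value using binary multiples, the
    # convention rclone documents for flags like --b2-upload-cutoff.
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if s and s[-1] in multipliers:
        return int(float(s[:-1]) * multipliers[s[-1]])
    return int(s)
```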


@@ -1,11 +1,110 @@
 ---
 title: "Documentation"
 description: "Rclone Changelog"
-date: "2018-09-01"
+date: "2018-10-15"
 ---

 # Changelog

## v1.44 - 2018-10-15

* New commands
    * serve ftp: Add ftp server (Antoine GIRARD)
    * settier: perform storage tier changes on supported remotes (sandeepkru)
* New Features
    * Reworked command line help
        * Make default help less verbose (Nick Craig-Wood)
        * Split flags up into global and backend flags (Nick Craig-Wood)
        * Implement specialised help for flags and backends (Nick Craig-Wood)
        * Show URL of backend help page when starting config (Nick Craig-Wood)
    * stats: Long names now split in center (Joanna Marek)
    * Add --log-format flag for more control over log output (dcpu)
    * rc: Add support for OPTIONS and basic CORS (frenos)
    * stats: show FatalErrors and NoRetryErrors in stats (Cédric Connes)
* Bug Fixes
    * Fix -P not ending with a new line (Nick Craig-Wood)
    * config: don't create default config dir when user supplies --config (albertony)
    * Don't print non-ASCII characters with --progress on windows (Nick Craig-Wood)
    * Correct logs for excluded items (ssaqua)
* Mount
    * Remove EXPERIMENTAL tags (Nick Craig-Wood)
* VFS
    * Fix race condition detected by serve ftp tests (Nick Craig-Wood)
    * Add vfs/poll-interval rc command (Fabian Möller)
    * Enable rename for nearly all remotes using server side Move or Copy (Nick Craig-Wood)
    * Reduce directory cache cleared by poll-interval (Fabian Möller)
    * Remove EXPERIMENTAL tags (Nick Craig-Wood)
* Local
    * Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
    * Preallocate files on Windows to reduce fragmentation (Nick Craig-Wood)
    * Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
* Cache
    * Add cache/fetch rc function (Fabian Möller)
    * Fix worker scale down (Fabian Möller)
    * Improve performance by not sending info requests for cached chunks (dcpu)
    * Fix error return value of cache/fetch rc method (Fabian Möller)
    * Documentation fix for cache-chunk-total-size (Anagh Kumar Baranwal)
    * Preserve leading / in wrapped remote path (Fabian Möller)
    * Add plex_insecure option to skip certificate validation (Fabian Möller)
    * Remove entries that no longer exist in the source (dcpu)
* Crypt
    * Preserve leading / in wrapped remote path (Fabian Möller)
* Alias
    * Fix handling of Windows network paths (Nick Craig-Wood)
* Azure Blob
    * Add --azureblob-list-chunk parameter (Santiago Rodríguez)
    * Implemented settier command support on azureblob remote. (sandeepkru)
    * Work around SDK bug which causes errors for chunk-sized files (Nick Craig-Wood)
* Box
    * Implement link sharing. (Sebastian Bünger)
* Drive
    * Add --drive-import-formats - google docs can now be imported (Fabian Möller)
    * Rewrite mime type and extension handling (Fabian Möller)
    * Add document links (Fabian Möller)
    * Add support for multipart document extensions (Fabian Möller)
    * Add support for apps-script to json export (Fabian Möller)
    * Fix escaped chars in documents during list (Fabian Möller)
    * Add --drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
    * Improve directory notifications in ChangeNotify (Fabian Möller)
    * When listing team drives in config, continue on failure (Nick Craig-Wood)
* FTP
    * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
* Google Cloud Storage
    * Fix service_account_file being ignored (Fabian Möller)
* Jottacloud
    * Minor improvement in quota info (omit if unlimited) (albertony)
    * Add --fast-list support (albertony)
    * Add permanent delete support: --jottacloud-hard-delete (albertony)
    * Add link sharing support (albertony)
    * Fix handling of reserved characters. (Sebastian Bünger)
    * Fix socket leak on Object.Remove (Nick Craig-Wood)
* Onedrive
    * Rework to support Microsoft Graph (Cnly)
        * **NB** this will require re-authenticating the remote
    * Removed upload cutoff and always do session uploads (Oliver Heyme)
    * Use single-part upload for empty files (Cnly)
    * Fix new fields not saved when editing old config (Alex Chen)
    * Fix sometimes special chars in filenames not replaced (Alex Chen)
    * Ignore OneNote files by default (Alex Chen)
    * Add link sharing support (jackyzy823)
* S3
    * Use custom pacer, to retry operations when reasonable (Craig Miskell)
    * Use configured server-side-encryption and storage class options when calling CopyObject() (Paul Kohout)
    * Make --s3-v2-auth flag (Nick Craig-Wood)
    * Fix v2 auth on files with spaces (Nick Craig-Wood)
* Union
    * Implement union backend which reads from multiple backends (Felix Brucker)
    * Implement optional interfaces (Move, DirMove, Copy etc) (Nick Craig-Wood)
    * Fix ChangeNotify to support multiple remotes (Fabian Möller)
    * Fix --backup-dir on union backend (Nick Craig-Wood)
* WebDAV
    * Add another time format (Nick Craig-Wood)
    * Add a small pause after failed upload before deleting file (Nick Craig-Wood)
    * Add workaround for missing mtime (buergi)
    * Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
* Yandex
    * Remove redundant nil checks (teresy)
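One of the New Features above is that the rc server now answers OPTIONS requests with basic CORS headers, which lets browser pages call the remote control API cross-origin. A minimal sketch of what such a preflight response carries (standard CORS header names; the handler shape is illustrative, not rclone's Go implementation):

```python
def cors_preflight_headers(origin, allowed_methods=("POST", "OPTIONS")):
    # Headers a server returns to an OPTIONS preflight so that a browser
    # will allow the subsequent cross-origin request.
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(allowed_methods),
        "Access-Control-Allow-Headers": "Content-Type",
    }
```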
## v1.43.1 - 2018-09-07

Point release to fix hubic and azureblob backends.


@@ -1,56 +1,22 @@
 ---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
 title: "rclone"
 slug: rclone
 url: /commands/rclone/
 ---

 ## rclone

-Sync files and directories to and from local and remote object stores - v1.43
+Show help for rclone commands, flags and backends.

 ### Synopsis

-Rclone is a command line program to sync files and directories to and
-from various cloud storage systems and using file transfer services, such as:
-
-* Amazon Drive
-* Amazon S3
-* Backblaze B2
-* Box
-* Dropbox
-* FTP
-* Google Cloud Storage
-* Google Drive
-* HTTP
-* Hubic
-* Jottacloud
-* Mega
-* Microsoft Azure Blob Storage
-* Microsoft OneDrive
-* OpenDrive
-* Openstack Swift / Rackspace cloud files / Memset Memstore
-* pCloud
-* QingStor
-* SFTP
-* Webdav / Owncloud / Nextcloud
-* Yandex Disk
-* The local filesystem
-
-Features
-
-* MD5/SHA1 hashes checked at all times for file integrity
-* Timestamps preserved on files
-* Partial syncs supported on a whole file basis
-* Copy mode to just copy new/changed files
-* Sync (one way) mode to make a directory identical
-* Check mode to check for file hash equality
-* Can sync to and from network, eg two different cloud accounts
-
-See the home page for installation, usage, documentation, changelog
-and configuration walkthroughs.
-
-* https://rclone.org/
+Rclone syncs files to and from cloud storage providers as well as
+mounting them, listing them in lots of different ways.
+
+See the home page (https://rclone.org/) for installation, usage,
+documentation, changelog and configuration walkthroughs.

 ```
@@ -60,259 +26,277 @@ rclone [flags]

### Options

```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob: hot, cool or archive.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int Size of blob list. (default 5000)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks. (default 4)
      --cache-writes Cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transfering (default)
      --delete-before When synchronizing, delete files on destination before transfering
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export Use alternate export URLs for google documents export.,
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string Deprecated: see export_formats
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever Keep new head revision of each file forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me.
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-team-drive string ID of the Team Drive
      --drive-trashed-only Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use file created date instead of modified date.,
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP bodies - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray Read list of source-file names from file
  -f, --filter stringArray Add a file-filtering rule
      --filter-from stringArray Read filtering patterns from a file
      --ftp-host string FTP host to connect to
      --ftp-pass string FTP password
      --ftp-port string FTP port, leave blank to use default (21)
      --ftp-user string FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string Access Control List for new buckets.
      --gcs-client-id string Google Application Client Id
      --gcs-client-secret string Google Application Client Secret
      --gcs-location string Location for the newly created buckets.
      --gcs-object-acl string Access Control List for new objects.
      --gcs-project-number string Project number.
      --gcs-service-account-file string Service Account Credentials JSON file path
      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
  -h, --help help for rclone
      --http-url string URL of http host to connect to
      --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string Hubic Client Id
      --hubic-client-secret string Hubic Client Secret
      --ignore-checksum Skip post copy check of checksums.
      --ignore-errors delete even if there are I/O errors
      --ignore-existing Skip all files that exist on destination
      --ignore-size Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times Don't skip files that match size and time - transfer all files
      --immutable Do not modify files. Fail if existing files have been modified.
      --include stringArray Include files matching pattern
      --include-from stringArray Read include patterns from file
      --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string The mountpoint to use.
      --jottacloud-pass string Password.
      --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string User Name
      --local-no-check-updated Don't check to see if the files change during upload
      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string Disable UNC (long path names) conversion on Windows
      --log-file string Log everything to this file
      --log-format string Comma separated list of log format options (default "date,time")
      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int Number of low level retries to do. (default 10)
      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int When synchronizing, limit the number of deletes (default -1)
      --max-depth int If set limits the recursion depth to this. (default -1)
      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int Maximum size of data to transfer. (default off)
      --mega-debug Output more debug from Mega.
      --mega-hard-delete Delete files permanently rather than putting them into the trash.
      --mega-pass string Password.
      --mega-user string User name
      --memprofile string Write memory profile to file
      --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration Max time diff to be considered the same (default 1ns)
      --no-check-certificate Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding Don't set Accept-Encoding: gzip.
      --no-traverse Obsolete - does nothing.
      --no-update-modtime Don't update destination mod-time if files identical.
  -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string Microsoft App Client Id
      --onedrive-client-secret string Microsoft App Client Secret
      --onedrive-drive-id string The ID of the drive to use
      --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
      --opendrive-password string Password.
      --opendrive-username string Username
      --pcloud-client-id string Pcloud App Client Id
      --pcloud-client-secret string Pcloud App Client Secret
  -P, --progress Show progress during transfer.
      --qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connnection retries. (default 3)
      --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
      --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
-V, --version Print the version number --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-pass string Password. --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-user string User name --swift-user string User name to log in (OS_USERNAME).
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-id string Yandex Client Id --syslog Use Syslog for logging
--yandex-client-secret string Yandex Client Secret --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
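Several of the flags above take SizeSuffix values written with b|k|M|G suffixes (e.g. `--onedrive-chunk-size 10M`, `--swift-chunk-size 5G`). As a rough illustration, assuming rclone's binary interpretation of these suffixes (1k = 1024 bytes), such a value expands to a byte count like this (the helper name is hypothetical, not part of rclone):

```python
def parse_size_suffix(value: str) -> int:
    """Expand an rclone-style size like '5M' or '100k' into bytes.

    Assumes binary multiples (1k = 1024 bytes), as rclone's SizeSuffix
    type documents; a bare number or a 'b' suffix means plain bytes.
    """
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if value and value[-1] in multipliers:
        # Allow fractional sizes such as '190.735M' from older defaults.
        return int(float(value[:-1]) * multipliers[value[-1]])
    return int(value)  # no suffix: plain bytes
```

For example, the `--drive-chunk-size` default of 8M corresponds to 8 × 1024² bytes under this interpretation.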
### SEE ALSO

* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as file system on a mountpoint.
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 15-Oct-2018
---
date: 2018-10-15T11:00:47+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
  -n, --dry-run   Do a trial run with no permanent changes
      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP headers - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -28,261 +28,279 @@ rclone authorize [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                           In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                          Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                The password of the Plex user
      --cache-plex-url string                     The URL of the Plex server
      --cache-plex-username string                The username of the Plex user
      --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                       Remote to cache.
      --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                              Cache file data on writes through the FS
      --checkers int                              Number of checkers to run in parallel. (default 8)
  -c, --checksum                                  Skip based on checksum & size, not mod-time & size
      --config string                             Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                       Connect timeout (default 1m0s)
  -L, --copy-links                                Follow symlinks and copy the pointed to item.
      --cpuprofile string                         Write cpu profile to file
      --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
      --crypt-password string                     Password or pass phrase for encryption.
      --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,261 +27,279 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                        Auth server URL.
      --acd-client-id string                       Amazon Application Client ID.
      --acd-client-secret string                   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix          Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                       Token server url.
      --acd-upload-wait-per-gb Duration            Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                        Remote or path to alias.
      --ask-password                               Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                               If enabled, do not request console confirmation.
      --azureblob-access-tier string               Access tier of blob: hot, cool or archive.
      --azureblob-account string                   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix            Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                  Endpoint for the service
      --azureblob-key string                       Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                   Size of blob list. (default 5000)
      --azureblob-sas-url string                   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                          Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                         Endpoint for the service.
      --b2-hard-delete                             Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                              Application Key
      --b2-test-mode string                        A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Purge the cache DB before --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
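Many of the flags above take rclone's SizeSuffix values (e.g. 4M, 5G, 200M), which use binary multiples (k = 1024, M = 1024², and so on). As an illustration only — this is a minimal sketch, not rclone's actual parser — such a suffix maps to bytes roughly like this:

```python
# Rough sketch of SizeSuffix-style parsing (illustrative, not rclone's code).
# Binary multiples: k = 1024, M = 1024**2, G = 1024**3, T = 1024**4.
def parse_size_suffix(s: str) -> int:
    """Parse strings like '100k', '4M', '5G' into a byte count."""
    units = {"b": 1, "k": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}
    if s and s[-1] in units:
        return int(float(s[:-1]) * units[s[-1]])
    return int(s)  # bare number: treated as bytes here for simplicity

print(parse_size_suffix("200M"))  # 209715200
print(parse_size_suffix("5G"))    # 5368709120
```

For example, the --b2-upload-cutoff default of 200M corresponds to 209,715,200 bytes (200 × 1024²).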
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -49,261 +49,279 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping, use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -43,261 +43,279 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
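For orientation only (this example is not part of the auto-generated reference), here is a sketch of how a few of the global flags listed above combine on the command line. The remote name `remote:` and the local path are placeholders for your own configuration:

```shell
# Hypothetical invocation: sync a local directory to a configured remote,
# combining several global flags from the listing above.
#   --transfers 8              run 8 file transfers in parallel (default 4)
#   --tpslimit 10              cap HTTP transactions at 10 per second
#   --stats 30s                print transfer stats every 30 seconds
#   --log-format "date,time"   log line format options (new in v1.44)
rclone sync /local/photos remote:photos \
  --transfers 8 \
  --tpslimit 10 \
  --stats 30s \
  --log-format "date,time"
```

All four flags appear in the global flag list; the source, destination and values are illustrative.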
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018


@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@ -28,261 +28,279 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
  -n, --dry-run  Do a trial run with no permanent changes
      --dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
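As an illustrative sketch of how these global flags combine on the command line (the remote name `remote:` and the paths below are placeholders, not taken from this page), a sync invocation might be composed like this; `--dry-run` is included so the command would change nothing even if executed:

```shell
#!/bin/sh
# Hypothetical example: "remote:" and both paths are placeholder names.
# All flags used are global flags documented in the list above.
cmd="rclone sync /data/photos remote:photos-backup \
  --transfers 8 \
  --checkers 16 \
  --fast-list \
  --log-level INFO \
  --log-format date,time \
  --stats 30s --stats-one-line \
  --dry-run"
echo "$cmd"
```

Backend-specific flags (such as `--drive-chunk-size` above) can be mixed freely with the general flags; rclone applies each one only to remotes of the matching backend.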
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -28,262 +28,280 @@
### Options inherited from parent commands
```
      --acd-auth-url string                    Auth server URL.
      --acd-client-id string                   Amazon Application Client ID.
      --acd-client-secret string               Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                   Token server url.
      --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                    Remote or path to alias.
      --ask-password                           Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                           If enabled, do not request console confirmation.
      --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
      --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string              Endpoint for the service
      --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int               Size of blob list. (default 5000)
      --azureblob-sas-url string               SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                      Account ID or Application Key ID
      --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                     Endpoint for the service.
      --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                          Application Key
      --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                            Include old versions in directory listings.
      --backup-dir string                      Make backups into hierarchy based in DIR.
      --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                   Box App Client Id.
      --box-client-secret string               Box App Client Secret
      --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                        In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix            The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix      The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                         Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration            How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                       Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string             Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string             The password of the Plex user
      --cache-plex-url string                  The URL of the Plex server
      --cache-plex-username string             The username of the Plex user
      --cache-read-retries int                 How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                    Remote to cache.
      --cache-rps int                          Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string           Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration           How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                      How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                           Cache file data on writes through the FS
      --checkers int                           Number of checkers to run in parallel. (default 8)
  -c, --checksum                               Skip based on checksum & size, not mod-time & size
      --config string                          Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
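As the backend option docs in this release show (for example `--b2-upload-cutoff` is listed with `Env Var: RCLONE_B2_UPLOAD_CUTOFF`), every flag above can also be supplied through an environment variable derived from its name. A minimal sketch of that mapping, where `flag_to_env_var` is our own illustrative helper and not part of rclone:

```python
def flag_to_env_var(flag: str) -> str:
    # Hypothetical helper (not in rclone itself): drop the leading dashes,
    # turn "-" into "_", upper-case, and add the RCLONE_ prefix.
    return "RCLONE_" + flag.lstrip("-").replace("-", "_").upper()

print(flag_to_env_var("--b2-upload-cutoff"))   # RCLONE_B2_UPLOAD_CUTOFF
print(flag_to_env_var("--drive-chunk-size"))   # RCLONE_DRIVE_CHUNK_SIZE
```

This is handy when configuring rclone in scripts or containers where a long command line of backend flags would be unwieldy.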
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@ -294,4 +312,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 15-Oct-2018
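The `rclone config create` subcommand listed above consumes the backend options documented on these pages as `<key> <value>` pairs. A minimal sketch, assuming a POSIX shell and that `rclone` may or may not be on `PATH` (the remote name `mys3` and the key/value pairs are hypothetical examples):

```shell
# Each <key> <value> pair corresponds to a backend flag from the list
# below, e.g. "provider" -> --s3-provider, "region" -> --s3-region.
if command -v rclone >/dev/null 2>&1; then
    tmpconf="$(mktemp)"
    # --config points rclone at a scratch file so no real config is touched
    rclone --config "$tmpconf" config create mys3 s3 \
        provider AWS region us-east-1 || true
    rm -f "$tmpconf"
    status="ran"
else
    status="skipped: rclone not on PATH"
fi
echo "$status"
```

The guard keeps the sketch a no-op on machines without rclone installed.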
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@ -33,261 +33,279 @@ rclone config create <name> <type> [<key> <value>]* [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018
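These inherited global flags are all combined on a single command line. As a minimal sketch (the remote name `remote:bucket/path`, the source directory, and the chosen values here are illustrative placeholders, not taken from this page), a copy tuned with a few of the flags listed above could look like:

```shell
#!/bin/sh
# Sketch only: compose a flag set for "rclone copy" from the global flags
# documented above. Remote, paths, and values are hypothetical placeholders.
flags="--transfers 8 --checkers 16 --bwlimit 10M --stats 30s --stats-one-line --log-level INFO"

# Print the command rather than executing it, so the sketch can be inspected
# without rclone installed or a remote configured.
echo "rclone copy /data remote:bucket/path $flags"
```

Every flag used here appears in the inherited options list above; note that `--bwlimit` also accepts a full timetable per its description, not just a single rate.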
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -25,261 +25,279 @@ rclone config delete <name> [flags]

### Options inherited from parent commands

```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                           In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                          Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                The password of the Plex user
      --cache-plex-url string                     The URL of the Plex server
      --cache-plex-username string                The username of the Plex user
      --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                       Remote to cache.
      --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                              Cache file data on writes through the FS
      --checkers int                              Number of checkers to run in parallel. (default 8)
  -c, --checksum                                  Skip based on checksum & size, not mod-time & size
      --config string                             Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                       Connect timeout (default 1m0s)
  -L, --copy-links                                Follow symlinks and copy the pointed to item.
      --cpuprofile string                         Write cpu profile to file
      --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
      --crypt-password string                     Password or pass phrase for encryption.
      --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                       Remote to encrypt/decrypt.
      --crypt-show-mapping                        For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transferring (default)
      --delete-before                             When synchronizing, delete files on destination before transferring
      --delete-during                             When synchronizing, delete files during transfer
      --delete-excluded                           Delete files on dest excluded from sync
      --disable string                            Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change            Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                    Use alternate export URLs for google documents export.
      --drive-auth-owner-only                     Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                    Google Application Client Id
      --drive-client-secret string                Google Application Client Secret
      --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                      Deprecated: see export_formats
      --drive-impersonate string                  Impersonate this user when using a service account.
      --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever               Keep new head revision of each file forever.
      --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string               ID of the root folder
      --drive-scope string                        Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string         Service Account Credentials JSON file path
      --drive-shared-with-me                      Only show files that are shared with me.
      --drive-skip-gdocs                          Skip google documents in all listings.
      --drive-team-drive string                   ID of the Team Drive
      --drive-trashed-only                        Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                    Use file created date instead of modified date.
      --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix     If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                  Dropbox App Client Id
      --dropbox-client-secret string              Dropbox App Client Secret
  -n, --dry-run                                   Do a trial run with no permanent changes
      --dump string                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                              Dump HTTP headers - may contain sensitive info
      --exclude stringArray                       Exclude files matching pattern
      --exclude-from stringArray                  Read exclude patterns from file
      --exclude-if-present string                 Exclude directories if filename is present
      --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                    Read list of source-file names from file
  -f, --filter stringArray                        Add a file-filtering rule
      --filter-from stringArray                   Read filtering patterns from a file
      --ftp-host string                           FTP host to connect to
      --ftp-pass string                           FTP password
      --ftp-port string                           FTP port, leave blank to use default (21)
      --ftp-user string                           FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                     Access Control List for new buckets.
      --gcs-client-id string                      Google Application Client Id
      --gcs-client-secret string                  Google Application Client Secret
      --gcs-location string                       Location for the newly created buckets.
      --gcs-object-acl string                     Access Control List for new objects.
      --gcs-project-number string                 Project number.
      --gcs-service-account-file string           Service Account Credentials JSON file path
      --gcs-storage-class string                  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                           URL of http host to connect to
      --hubic-chunk-size SizeSuffix               Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                    Hubic Client Id
      --hubic-client-secret string                Hubic Client Secret
      --ignore-checksum                           Skip post copy check of checksums.
      --ignore-errors                             Delete even if there are I/O errors
      --ignore-existing                           Skip all files that exist on destination
      --ignore-size                               Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                              Don't skip files that match size and time - transfer all files
      --immutable                                 Do not modify files. Fail if existing files have been modified.
      --include stringArray                       Include files matching pattern
      --include-from stringArray                  Read include patterns from file
      --jottacloud-hard-delete                    Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix    Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string              The mountpoint to use.
      --jottacloud-pass string                    Password.
      --jottacloud-unlink                         Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                    User Name
      --local-no-check-updated                    Don't check to see if the files change during upload
      --local-no-unicode-normalization            Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                        Disable UNC (long path names) conversion on Windows
      --log-file string                           Log everything to this file
      --log-format string                         Comma separated list of log format options (default "date,time")
      --log-level string                          Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                     Number of low level retries to do. (default 10)
      --max-age duration                          Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                           Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                            When synchronizing, limit the number of deletes (default -1)
      --max-depth int                             If set limits the recursion depth to this. (default -1)
      --max-size int                              Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                          Maximum size of data to transfer. (default off)
      --mega-debug                                Output more debug from Mega.
      --mega-hard-delete                          Delete files permanently rather than putting them into the trash.
      --mega-pass string                          Password.
      --mega-user string                          User name
      --memprofile string                         Write memory profile to file
      --min-age duration                          Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int                              Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                    Max time diff to be considered the same (default 1ns)
      --no-check-certificate                      Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                          Don't set Accept-Encoding: gzip.
      --no-traverse                               Obsolete - does nothing.
      --no-update-modtime                         Don't update destination mod-time if files identical.
  -x, --one-file-system                           Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix            Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                 Microsoft App Client Id
      --onedrive-client-secret string             Microsoft App Client Secret
      --onedrive-drive-id string                  The ID of the drive to use
      --onedrive-drive-type string                The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files             Set to make OneNote files show up in directory listings.
      --opendrive-password string                 Password.
      --opendrive-username string                 Username
      --pcloud-client-id string                   Pcloud App Client Id
      --pcloud-client-secret string               Pcloud App Client Secret
  -P, --progress                                  Show progress during transfer.
      --qingstor-access-key-id string             QingStor Access Key ID
      --qingstor-connection-retries int           Number of connection retries. (default 3)
      --qingstor-endpoint string                  Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                         Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string         QingStor Secret Access Key (password)
      --qingstor-zone string                      Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.

###### Auto generated by spf13/cobra on 15-Oct-2018
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
rclone config dump [flags]

### Options inherited from parent commands

```
      --acd-auth-url string                         Auth server URL.
      --acd-client-id string                        Amazon Application Client ID.
      --acd-client-secret string                    Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix           Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                        Token server url.
      --acd-upload-wait-per-gb Duration             Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                         Remote or path to alias.
      --ask-password                                Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                If enabled, do not request console confirmation.
      --azureblob-access-tier string                Access tier of blob: hot, cool or archive.
      --azureblob-account string                    Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix             Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                   Endpoint for the service
      --azureblob-key string                        Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                    Size of blob list. (default 5000)
      --azureblob-sas-url string                    SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                           Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                    Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                          Endpoint for the service.
      --b2-hard-delete                              Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                               Application Key
      --b2-test-mode string                         A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                 Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                 Include old versions in directory listings.
      --backup-dir string                           Make backups into hierarchy based in DIR.
      --bind string                                 Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                        Box App Client Id.
      --box-client-secret string                    Box App Client Secret
      --box-commit-retries int                      Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                             In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                         Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration         How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                       Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                     Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                 The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix           The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                        Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                              Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                 How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                            Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                     How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                  The password of the Plex user
      --cache-plex-url string                       The URL of the Plex server
      --cache-plex-username string                  The username of the Plex user
      --cache-read-retries int                      How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                         Remote to cache.
      --cache-rps int                               Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                           How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                Cache file data on writes through the FS
      --checkers int                                Number of checkers to run in parallel. (default 8)
  -c, --checksum                                    Skip based on checksum & size, not mod-time & size
      --config string                               Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                         Connect timeout (default 1m0s)
  -L, --copy-links                                  Follow symlinks and copy the pointed to item.
      --cpuprofile string                           Write cpu profile to file
      --crypt-directory-name-encryption             Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string            How to encrypt the filenames. (default "standard")
      --crypt-password string                       Password or pass phrase for encryption.
      --crypt-password2 string                      Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                         Remote to encrypt/decrypt.
      --crypt-show-mapping                          For all files listed show how the names encrypt.
      --delete-after                                When synchronizing, delete files on destination after transferring (default)
      --delete-before                               When synchronizing, delete files on destination before transferring
      --delete-during                               When synchronizing, delete files during transfer
      --delete-excluded                             Delete files on dest excluded from sync
      --disable string                              Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                     Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change              Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                      Use alternate export URLs for google documents export.
      --drive-auth-owner-only                       Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                      Google Application Client Id
      --drive-client-secret string                  Google Application Client Secret
      --drive-export-formats string                 Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                        Deprecated: see export_formats
      --drive-impersonate string                    Impersonate this user when using a service account.
      --drive-import-formats string                 Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                 Keep new head revision of each file forever.
      --drive-list-chunk int                        Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                 ID of the root folder
      --drive-scope string                          Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string    Service Account Credentials JSON blob
      --drive-service-account-file string           Service Account Credentials JSON file path
      --drive-shared-with-me                        Only show files that are shared with me.
      --drive-skip-gdocs                            Skip google documents in all listings.
      --drive-team-drive string                     ID of the Team Drive
      --drive-trashed-only                          Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                      Use file created date instead of modified date.
      --drive-use-trash                             Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix       If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix               Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                    Dropbox App Client Id
      --dropbox-client-secret string                Dropbox App Client Secret
  -n, --dry-run                                     Do a trial run with no permanent changes
      --dump string                                 List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                 Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                Dump HTTP headers - may contain sensitive info
      --exclude stringArray                         Exclude files matching pattern
      --exclude-from stringArray                    Read exclude patterns from file
      --exclude-if-present string                   Exclude directories if filename is present
      --fast-list                                   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                      Read list of source-file names from file
  -f, --filter stringArray                          Add a file-filtering rule
      --filter-from stringArray                     Read filtering patterns from a file
      --ftp-host string                             FTP host to connect to
      --ftp-pass string                             FTP password
      --ftp-port string                             FTP port, leave blank to use default (21)
      --ftp-user string                             FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                       Access Control List for new buckets.
      --gcs-client-id string                        Google Application Client Id
      --gcs-client-secret string                    Google Application Client Secret
      --gcs-location string                         Location for the newly created buckets.
      --gcs-object-acl string                       Access Control List for new objects.
      --gcs-project-number string                   Project number.
      --gcs-service-account-file string             Service Account Credentials JSON file path
      --gcs-storage-class string                    The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                             URL of http host to connect to
      --hubic-chunk-size SizeSuffix                 Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                      Hubic Client Id
      --hubic-client-secret string                  Hubic Client Secret
      --ignore-checksum                             Skip post copy check of checksums.
      --ignore-errors                               Delete even if there are I/O errors
      --ignore-existing                             Skip all files that exist on destination
      --ignore-size                                 Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                                Don't skip files that match size and time - transfer all files
      --immutable                                   Do not modify files. Fail if existing files have been modified.
      --include stringArray                         Include files matching pattern
      --include-from stringArray                    Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
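A few of these global flags cover the common sync invocation. A minimal sketch — the source path and the `remote:` name below are placeholders, not taken from this page:

```
rclone sync /path/to/source remote:backup \
    --transfers 8 \
    --log-level INFO \
    --log-format "date,time,microseconds" \
    --progress
```

`--log-format` is new in v1.44 (see the changelog); the other flags shown are long-standing and documented in the list above.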
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@ -28,261 +28,279 @@ rclone config edit [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                    Auth server URL.
      --acd-client-id string                   Amazon Application Client ID.
      --acd-client-secret string               Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                   Token server url.
      --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                    Remote or path to alias.
      --ask-password                           Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                           If enabled, do not request console confirmation.
      --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
      --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string              Endpoint for the service
      --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int               Size of blob list. (default 5000)
      --azureblob-sas-url string               SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                      Account ID or Application Key ID
      --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                     Endpoint for the service.
      --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                          Application Key
      --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                            Include old versions in directory listings.
      --backup-dir string                      Make backups into hierarchy based in DIR.
      --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                   Box App Client Id.
      --box-client-secret string               Box App Client Secret
      --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                        In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix            The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix      The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                         Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration            How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                       Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string             Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string             The password of the Plex user
      --cache-plex-url string                  The URL of the Plex server
      --cache-plex-username string             The username of the Plex user
      --cache-read-retries int                 How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                    Remote to cache.
      --cache-rps int                          Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string           Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration           How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                      How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                           Cache file data on writes through the FS
      --checkers int                           Number of checkers to run in parallel. (default 8)
  -c, --checksum                               Skip based on checksum & size, not mod-time & size
      --config string                          Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                    Connect timeout (default 1m0s)
  -L, --copy-links                             Follow symlinks and copy the pointed to item.
      --cpuprofile string                      Write cpu profile to file
      --crypt-directory-name-encryption        Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string       How to encrypt the filenames. (default "standard")
      --crypt-password string                  Password or pass phrase for encryption.
      --crypt-password2 string                 Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                    Remote to encrypt/decrypt.
      --crypt-show-mapping                     For all files listed show how the names encrypt.
      --delete-after                           When synchronizing, delete files on destination after transferring (default)
      --delete-before                          When synchronizing, delete files on destination before transferring
      --delete-during                          When synchronizing, delete files during transfer
      --delete-excluded                        Delete files on dest excluded from sync
      --disable string                         Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change         Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                 Use alternate export URLs for google documents export.
      --drive-auth-owner-only                  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix            Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                 Google Application Client Id
      --drive-client-secret string             Google Application Client Secret
      --drive-export-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                   Deprecated: see export_formats
      --drive-impersonate string               Impersonate this user when using a service account.
      --drive-import-formats string            Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 15-Oct-2018
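As the backend option descriptions earlier in this commit note (for example `--b2-upload-cutoff` with `Env Var: RCLONE_B2_UPLOAD_CUTOFF`), every backend flag in the list above can also be supplied through an environment variable. A minimal shell sketch of that naming convention follows; `flag_to_env` is an illustrative helper, not an rclone command:

```shell
# Sketch of rclone's flag -> environment variable naming convention:
# strip the leading dashes, uppercase, and replace '-' with '_',
# then prefix with RCLONE_.
flag_to_env() {
  echo "RCLONE_$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --b2-upload-cutoff   # RCLONE_B2_UPLOAD_CUTOFF
flag_to_env --drive-chunk-size   # RCLONE_DRIVE_CHUNK_SIZE
```

Setting `RCLONE_B2_UPLOAD_CUTOFF=100M` in the environment should therefore behave like passing `--b2-upload-cutoff 100M` on the command line.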

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@ -25,261 +25,279 @@ rclone config file [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                      Auth server URL.
      --acd-client-id string                     Amazon Application Client ID.
      --acd-client-secret string                 Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix        Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                     Token server url.
      --acd-upload-wait-per-gb Duration          Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                      Remote or path to alias.
      --ask-password                             Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                             If enabled, do not request console confirmation.
      --azureblob-access-tier string             Access tier of blob: hot, cool or archive.
      --azureblob-account string                 Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix          Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                Endpoint for the service
      --azureblob-key string                     Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                 Size of blob list. (default 5000)
      --azureblob-sas-url string                 SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix       Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                        Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                 Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                       Endpoint for the service.
      --b2-hard-delete                           Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                            Application Key
      --b2-test-mode string                      A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                              Include old versions in directory listings.
      --backup-dir string                        Make backups into hierarchy based in DIR.
      --bind string                              Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                     Box App Client Id.
      --box-client-secret string                 Box App Client Secret
      --box-commit-retries int                   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix             Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                          In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                      Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration      How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                    Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix              The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix        The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                     Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                           Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration              How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
      --swift-auth string   Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string   Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int   AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string   User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string   Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth   Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string   API key or password (OS_PASSWORD).
      --swift-region string   Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string   The storage policy to use when creating a new container
      --swift-storage-url string   Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string   Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string   Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string   Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string   User name to log in (OS_USERNAME).
      --swift-user-id string   User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog   Use Syslog for logging
      --syslog-facility string   Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration   IO idle timeout (default 5m0s)
      --tpslimit float   Limit HTTP transactions per second to this.
      --tpslimit-burst int   Max burst of transactions for --tpslimit. (default 1)
      --track-renames   When synchronizing, track file renames and do a server side move if possible
      --transfers int   Number of file transfers to run in parallel. (default 4)
      --union-remotes string   List of space separated remotes.
  -u, --update   Skip files that are newer on the destination.
      --use-server-modtime   Use server modified time instead of object metadata
      --user-agent string   Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
  -v, --verbose count   Print lots more stuff (repeat for more)
      --webdav-bearer-token string   Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string   Password.
      --webdav-url string   URL of http host to connect to
      --webdav-user string   User name
      --webdav-vendor string   Name of the Webdav site/service/software you are using
      --yandex-client-id string   Yandex Client Id
      --yandex-client-secret string   Yandex Client Secret
```

### SEE ALSO

* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.

###### Auto generated by spf13/cobra on 15-Oct-2018
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -32,261 +32,279 @@ rclone config password <name> [<key> <value>]+ [flags]
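The hunk header above carries the command's usage line: `rclone config password <name> [<key> <value>]+ [flags]`, i.e. a remote name followed by one or more key/value pairs. A minimal sketch of an invocation, assuming a remote named `myremote` already exists in the config (the remote name and the key names here are illustrative placeholders, not taken from this page):

```shell
# Update the password stored under the "pass" key of the remote "myremote"
# without opening the interactive config session.
rclone config password myremote pass mySecretPassword

# Per the usage line, several key/value pairs may be set in one call.
rclone config password myremote user alice pass mySecretPassword
```

Note that a value passed on the command line may end up in shell history; the interactive `rclone config` session avoids that.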
### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
  -n, --dry-run   Do a trial run with no permanent changes
      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP headers - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   Delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
      --jottacloud-pass string   Password.
      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string   User Name
      --local-no-check-updated   Don't check to see if the files change during upload
      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string   Disable UNC (long path names) conversion on Windows
      --log-file string   Log everything to this file
      --log-format string   Comma separated list of log format options (default "date,time")
      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int   Number of low level retries to do. (default 10)
      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int   When synchronizing, limit the number of deletes (default -1)
      --max-depth int   If set limits the recursion depth to this. (default -1)
      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int   Maximum size of data to transfer. (default off)
      --mega-debug   Output more debug from Mega.
      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
      --mega-pass string   Password.
      --mega-user string   User name
      --memprofile string   Write memory profile to file
      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration   Max time diff to be considered the same (default 1ns)
      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
      --no-traverse   Obsolete - does nothing.
      --no-update-modtime   Don't update destination mod-time if files identical.
  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string   Microsoft App Client Id
      --onedrive-client-secret string   Microsoft App Client Secret
      --onedrive-drive-id string   The ID of the drive to use
      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
      --opendrive-password string   Password.
      --opendrive-username string   Username
      --pcloud-client-id string   Pcloud App Client Id
      --pcloud-client-secret string   Pcloud App Client Secret
  -P, --progress   Show progress during transfer.
      --qingstor-access-key-id string   QingStor Access Key ID
      --qingstor-connection-retries int   Number of connection retries. (default 3)
      --qingstor-endpoint string   Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
      --qingstor-zone string   Zone to connect to.
  -q, --quiet   Print as little stuff as possible
      --rc   Enable the remote control server.
      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string   Client certificate authority to verify clients with
      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
      --rc-key string   SSL PEM Private key
      --rc-max-header-bytes int   Maximum size of request header (default 4096)
      --rc-pass string   Password for authentication.
      --rc-realm string   realm for authentication (default "rclone")
      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
      --rc-user string   User name for authentication.
      --retries int   Retry operations this many times if they fail (default 3)
      --retries-sleep duration   Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string   AWS Access Key ID.
      --s3-acl string   Canned ACL used when creating buckets and/or storing objects in S3.
      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum   Don't store MD5 checksum with object metadata
      --s3-endpoint string   Endpoint for S3 API.
      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style   If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string   Location constraint - must be set to match the Region.
      --s3-provider string   Choose your S3 provider.
      --s3-region string   Region to connect to.
      --s3-secret-access-key string   AWS Secret Access Key (password)
      --s3-server-side-encryption string   The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string   An AWS session token
      --s3-sse-kms-key-id string   If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string   The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int   Concurrency for multipart uploads. (default 2)
      --s3-v2-auth   If true use v2 authentication.
      --sftp-ask-password   Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck   Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string   SSH host to connect to
      --sftp-key-file string   Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018

---
date: 2018-10-15T11:00:47+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
---

### Options inherited from parent commands

```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob: hot, cool or archive.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int Size of blob list. (default 5000)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks. (default 4)
      --cache-writes Cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transferring (default)
      --delete-before When synchronizing, delete files on destination before transferring
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export Use alternate export URLs for google documents export.
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string Deprecated: see export_formats
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever Keep new head revision of each file forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me.
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-team-drive string ID of the Team Drive
      --drive-trashed-only Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use file created date instead of modified date.
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP headers - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray Read list of source-file names from file
  -f, --filter stringArray Add a file-filtering rule
      --filter-from stringArray Read filtering patterns from a file
      --ftp-host string FTP host to connect to
      --ftp-pass string FTP password
      --ftp-port string FTP port, leave blank to use default (21)
      --ftp-user string FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string Access Control List for new buckets.
      --gcs-client-id string Google Application Client Id
      --gcs-client-secret string Google Application Client Secret
      --gcs-location string Location for the newly created buckets.
      --gcs-object-acl string Access Control List for new objects.
      --gcs-project-number string Project number.
      --gcs-service-account-file string Service Account Credentials JSON file path
      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
      --http-url string URL of http host to connect to
      --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string Hubic Client Id
      --hubic-client-secret string Hubic Client Secret
      --ignore-checksum Skip post copy check of checksums.
      --ignore-errors Delete even if there are I/O errors
      --ignore-existing Skip all files that exist on destination
      --ignore-size Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times Don't skip files that match size and time - transfer all files
      --immutable Do not modify files. Fail if existing files have been modified.
      --include stringArray Include files matching pattern
      --include-from stringArray Read include patterns from file
      --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string The mountpoint to use.
      --jottacloud-pass string Password.
      --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string User Name
      --local-no-check-updated Don't check to see if the files change during upload
      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string Disable UNC (long path names) conversion on Windows
      --log-file string Log everything to this file
      --log-format string Comma separated list of log format options (default "date,time")
      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int Number of low level retries to do. (default 10)
      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int When synchronizing, limit the number of deletes (default -1)
      --max-depth int If set limits the recursion depth to this. (default -1)
      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int Maximum size of data to transfer. (default off)
      --mega-debug Output more debug from Mega.
      --mega-hard-delete Delete files permanently rather than putting them into the trash.
      --mega-pass string Password.
      --mega-user string User name
      --memprofile string Write memory profile to file
      --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration Max time diff to be considered the same (default 1ns)
      --no-check-certificate Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding Don't set Accept-Encoding: gzip.
      --no-traverse Obsolete - does nothing.
      --no-update-modtime Don't update destination mod-time if files identical.
  -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string Microsoft App Client Id
      --onedrive-client-secret string Microsoft App Client Secret
      --onedrive-drive-id string The ID of the drive to use
      --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
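As a quick illustration of how the global flags listed above combine on a real command line, a typical sync invocation might look like the following. This is a sketch only: the local path and the remote name `remote:` are placeholders, not taken from this document, and it assumes rclone is installed and the remote is already configured.

```shell
# Sync a local directory to a remote, using a few of the global flags
# documented above: parallel transfers, a bandwidth cap, periodic stats,
# and a more verbose log level.
rclone sync /path/to/local remote:backup \
    --transfers 4 \
    --bwlimit 10M \
    --stats 30s \
    --log-level INFO
```

Most of these flags can also be supplied via `RCLONE_`-prefixed environment variables (e.g. `RCLONE_TRANSFERS=4`), following the same pattern shown for backend flags elsewhere in this manual.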
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 15-Oct-2018
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
---
### Options inherited from parent commands
```
--acd-auth-url string                     Auth server URL.
--acd-client-id string                    Amazon Application Client ID.
--acd-client-secret string                Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix       Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string                    Token server url.
--acd-upload-wait-per-gb Duration         Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string                     Remote or path to alias.
--ask-password                            Allow prompt for password for encrypted configuration. (default true)
--auto-confirm                            If enabled, do not request console confirmation.
--azureblob-access-tier string            Access tier of blob: hot, cool or archive.
--azureblob-account string                Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix         Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string               Endpoint for the service
--azureblob-key string                    Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int                Size of blob list. (default 5000)
--azureblob-sas-url string                SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix      Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string                       Account ID or Application Key ID
--b2-chunk-size SizeSuffix                Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string                      Endpoint for the service.
--b2-hard-delete                          Permanently delete files on remote removal, otherwise hide files.
--b2-key string                           Application Key
--b2-test-mode string                     A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix             Cutoff for switching to chunked upload. (default 200M)
--b2-versions                             Include old versions in directory listings.
--backup-dir string                       Make backups into hierarchy based in DIR.
--bind string                             Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string                    Box App Client Id.
--box-client-secret string                Box App Client Secret
--box-commit-retries int                  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix            Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int                         In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration     How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory                   Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string                 Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix             The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix       The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string                    Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge                          Clear all the cached data for this remote on start.
--cache-db-wait-time Duration             How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string                        Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration                 How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string              Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string              The password of the Plex user
--cache-plex-url string                   The URL of the Plex server
--cache-plex-username string              The username of the Plex user
--cache-read-retries int                  How many times to retry a read from a cache storage. (default 10)
--cache-remote string                     Remote to cache.
--cache-rps int                           Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string            Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration            How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int                       How many workers should run in parallel to download chunks. (default 4)
--cache-writes                            Cache file data on writes through the FS
--checkers int                            Number of checkers to run in parallel. (default 8)
-c, --checksum                            Skip based on checksum & size, not mod-time & size
--config string                           Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration                     Connect timeout (default 1m0s)
-L, --copy-links                          Follow symlinks and copy the pointed to item.
--cpuprofile string                       Write cpu profile to file
--crypt-directory-name-encryption         Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string        How to encrypt the filenames. (default "standard")
--crypt-password string                   Password or pass phrase for encryption.
--crypt-password2 string                  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string                     Remote to encrypt/decrypt.
--crypt-show-mapping                      For all files listed show how the names encrypt.
--delete-after                            When synchronizing, delete files on destination after transferring (default)
--delete-before                           When synchronizing, delete files on destination before transferring
--delete-during                           When synchronizing, delete files during transfer
--delete-excluded                         Delete files on dest excluded from sync
--disable string                          Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse                 Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change          Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export                  Use alternate export URLs for google documents export.
--drive-auth-owner-only                   Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix             Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string                  Google Application Client Id
--drive-client-secret string              Google Application Client Secret
--drive-export-formats string             Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string                    Deprecated: see export_formats
--drive-impersonate string                Impersonate this user when using a service account.
--drive-import-formats string             Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever             Keep new head revision of each file forever.
--drive-list-chunk int                    Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string             ID of the root folder
--drive-scope string                      Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string   Service Account Credentials JSON blob
--drive-service-account-file string       Service Account Credentials JSON file path
--drive-shared-with-me                    Only show files that are shared with me.
--drive-skip-gdocs                        Skip google documents in all listings.
--drive-team-drive string                 ID of the Team Drive
--drive-trashed-only                      Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date                  Use file created date instead of modified date.
--drive-use-trash                         Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix   If objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix           Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string                Dropbox App Client Id
--dropbox-client-secret string            Dropbox App Client Secret
-n, --dry-run                             Do a trial run with no permanent changes
--dump string                             List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies                             Dump HTTP headers and bodies - may contain sensitive info
--dump-headers                            Dump HTTP headers - may contain sensitive info
--exclude stringArray                     Exclude files matching pattern
--exclude-from stringArray                Read exclude patterns from file
--exclude-if-present string               Exclude directories if filename is present
--fast-list                               Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray                  Read list of source-file names from file
-f, --filter stringArray                  Add a file-filtering rule
--filter-from stringArray                 Read filtering patterns from a file
--ftp-host string                         FTP host to connect to
--ftp-pass string                         FTP password
--ftp-port string                         FTP port, leave blank to use default (21)
--ftp-user string                         FTP username, leave blank for current username, ncw
--gcs-bucket-acl string                   Access Control List for new buckets.
--gcs-client-id string                    Google Application Client Id
--gcs-client-secret string                Google Application Client Secret
--gcs-location string                     Location for the newly created buckets.
--gcs-object-acl string                   Access Control List for new objects.
--gcs-project-number string               Project number.
--gcs-service-account-file string         Service Account Credentials JSON file path
--gcs-storage-class string                The storage class to use when storing objects in Google Cloud Storage.
--http-url string                         URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
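Many of the size-valued flags above (`--min-size`, `--max-size`, `--s3-chunk-size`, `--cache-chunk-total-size`, etc.) take a SizeSuffix value, where the `k`, `M` and `G` suffixes are 1024-based, so `5M` means 5 MiB, not 5,000,000 bytes. A quick sanity check of the arithmetic in plain shell (no rclone required):

```shell
# SizeSuffix suffixes are binary multiples: k = 1024, M = 1024*k, G = 1024*M.
k=1024
m=$((1024 * k))   # 1 MiB
g=$((1024 * m))   # 1 GiB
echo "5M = $((5 * m)) bytes"     # e.g. the value behind --s3-chunk-size 5M
echo "10G = $((10 * g)) bytes"   # e.g. the value behind --cache-chunk-total-size 10G
```

(`b` means bytes and omitting a suffix on the plain `int` size flags such as `--min-size` is interpreted as kBytes.)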
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@ -32,261 +32,279 @@ rclone config update <name> [<key> <value>]+ [flags]
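The usage line above, `rclone config update <name> [<key> <value>]+ [flags]`, takes the remote name followed by one or more key/value pairs. A hedged sketch of a non-interactive update (the remote name `mys3` and the keys shown are illustrative, not taken from this page):

```shell
# Illustrative only: set two config keys on an existing remote called "mys3"
# without entering the interactive config session.
rclone config update mys3 env_auth true region eu-west-1
```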
### Options inherited from parent commands

```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018


@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@ -61,261 +61,279 @@ rclone copy source:path dest:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                           In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Purge the cache DB before --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string  SSH username, leave blank for current username, ncw
      --size-only  Skip based on size only, not mod-time or checksum
      --skip-links  Don't warn about skipped symlinks.
      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line  Make the stats fit on one line.
      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string  Suffix for use with --backup-dir.
      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string  API key or password (OS_PASSWORD).
      --swift-region string  Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string  The storage policy to use when creating a new container
      --swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string  User name to log in (OS_USERNAME).
      --swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog  Use Syslog for logging
      --syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration  IO idle timeout (default 5m0s)
      --tpslimit float  Limit HTTP transactions per second to this.
      --tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
      --track-renames  When synchronizing, track file renames and do a server side move if possible
      --transfers int  Number of file transfers to run in parallel. (default 4)
      --union-remotes string  List of space separated remotes.
  -u, --update  Skip files that are newer on the destination.
      --use-server-modtime  Use server modified time instead of object metadata
      --user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
  -v, --verbose count  Print lots more stuff (repeat for more)
      --webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string  Password.
      --webdav-url string  URL of http host to connect to
      --webdav-user string  User name
      --webdav-vendor string  Name of the Webdav site/service/software you are using
      --yandex-client-id string  Yandex Client Id
      --yandex-client-secret string  Yandex Client Secret
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
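Every flag in the listing above can also be supplied as an environment variable: rclone derives the variable name by stripping the leading `--`, upper-casing it, replacing hyphens with underscores, and prefixing `RCLONE_` (so `--b2-upload-cutoff` becomes `RCLONE_B2_UPLOAD_CUTOFF`). A minimal shell sketch of that mapping; the `flag_to_env` helper is hypothetical, for illustration only:

```shell
# Hypothetical helper (not part of rclone): derive the RCLONE_* environment
# variable name that corresponds to a command line flag.
flag_to_env() {
    # strip the leading "--", upper-case, and turn hyphens into underscores
    echo "RCLONE_$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --b2-upload-cutoff   # prints RCLONE_B2_UPLOAD_CUTOFF
flag_to_env --transfers          # prints RCLONE_TRANSFERS
```

This means `RCLONE_TRANSFERS=8 rclone copy src: dst:` is equivalent to passing `--transfers 8` on the command line.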

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@ -51,261 +51,279 @@ rclone copyto source:path dest:path [flags]

### Options inherited from parent commands

```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
  -n, --dry-run  Do a trial run with no permanent changes
      --dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers  Dump HTTP headers - may contain sensitive info
      --exclude stringArray  Exclude files matching pattern
      --exclude-from stringArray  Read exclude patterns from file
      --exclude-if-present string  Exclude directories if filename is present
      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray  Read list of source-file names from file
  -f, --filter stringArray  Add a file-filtering rule
      --filter-from stringArray  Read filtering patterns from a file
      --ftp-host string  FTP host to connect to
      --ftp-pass string  FTP password
      --ftp-port string  FTP port, leave blank to use default (21)
      --ftp-user string  FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string  Access Control List for new buckets.
      --gcs-client-id string  Google Application Client Id
      --gcs-client-secret string  Google Application Client Secret
      --gcs-location string  Location for the newly created buckets.
      --gcs-object-acl string  Access Control List for new objects.
      --gcs-project-number string  Project number.
      --gcs-service-account-file string  Service Account Credentials JSON file path
      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string  URL of http host to connect to
      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string  Hubic Client Id
      --hubic-client-secret string  Hubic Client Secret
      --ignore-checksum  Skip post copy check of checksums.
      --ignore-errors  delete even if there are I/O errors
      --ignore-existing  Skip all files that exist on destination
      --ignore-size  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times  Don't skip files that match size and time - transfer all files
      --immutable  Do not modify files. Fail if existing files have been modified.
      --include stringArray  Include files matching pattern
      --include-from stringArray  Read include patterns from file
      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string  The mountpoint to use.
      --jottacloud-pass string  Password.
      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string  User Name
      --local-no-check-updated  Don't check to see if the files change during upload
      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string  Disable UNC (long path names) conversion on Windows
      --log-file string  Log everything to this file
      --log-format string  Comma separated list of log format options (default "date,time")
      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int  Number of low level retries to do. (default 10)
      --max-age duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int  When synchronizing, limit the number of deletes (default -1)
      --max-depth int  If set limits the recursion depth to this. (default -1)
      --max-size int  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int  Maximum size of data to transfer. (default off)
      --mega-debug  Output more debug from Mega.
      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
      --mega-pass string  Password.
      --mega-user string  User name
      --memprofile string  Write memory profile to file
      --min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration  Max time diff to be considered the same (default 1ns)
      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
      --no-traverse  Obsolete - does nothing.
      --no-update-modtime  Don't update destination mod-time if files identical.
  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string  Microsoft App Client Id
      --onedrive-client-secret string  Microsoft App Client Secret
      --onedrive-drive-id string  The ID of the drive to use
      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
      --opendrive-password string  Password.
      --opendrive-username string  Username
      --pcloud-client-id string  Pcloud App Client Id
      --pcloud-client-secret string  Pcloud App Client Secret
  -P, --progress  Show progress during transfer.
      --qingstor-access-key-id string  QingStor Access Key ID
      --qingstor-connection-retries int  Number of connection retries. (default 3)
      --qingstor-endpoint string  Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
      --qingstor-zone string  Zone to connect to.
  -q, --quiet  Print as little stuff as possible
      --rc  Enable the remote control server.
      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string  Client certificate authority to verify clients with
      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
      --rc-key string  SSL PEM Private key
      --rc-max-header-bytes int  Maximum size of request header (default 4096)
      --rc-pass string  Password for authentication.
      --rc-realm string  realm for authentication (default "rclone")
      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
      --rc-user string  User name for authentication.
      --retries int  Retry operations this many times if they fail (default 3)
      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string  AWS Access Key ID.
      --s3-acl string  Canned ACL used when creating buckets and/or storing objects in S3.
      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum  Don't store MD5 checksum with object metadata
      --s3-endpoint string  Endpoint for S3 API.
      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string  Location constraint - must be set to match the Region.
      --s3-provider string  Choose your S3 provider.
      --s3-region string  Region to connect to.
      --s3-secret-access-key string  AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
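As a usage sketch (the local path and the `remote:` name below are illustrative, not taken from this page), several of the global flags listed above can be combined on one command line, with the `--rc-*` flags guarding the remote control API:

```shell
# Sync with a --bwlimit timetable: 512 kBytes/s from 08:00, no limit from 19:00.
# --rc starts the remote control server for the duration of the sync.
rclone sync /local/photos remote:photos \
    --bwlimit "08:00,512 19:00,off" \
    --transfers 8 --checkers 16 \
    --stats 30s --stats-one-line \
    --rc --rc-addr localhost:5572 --rc-user admin --rc-pass secret

# While the sync runs, the rc API can be queried (endpoints take POST requests;
# the exact endpoints available depend on the rclone version):
curl -u admin:secret -X POST http://localhost:5572/core/memstats
```

The timetable form of `--bwlimit` switches limits at the given times of day; the special value `off` removes the limit for that period.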
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
@@ -28,261 +28,279 @@ rclone copyurl https://example.com dest:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
-c, --checksum  Skip based on checksum & size, not mod-time & size
--config string  Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration  Connect timeout (default 1m0s)
-L, --copy-links  Follow symlinks and copy the pointed to item.
--cpuprofile string  Write cpu profile to file
--crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
--crypt-password string  Password or pass phrase for encryption.
--crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string  Remote to encrypt/decrypt.
--crypt-show-mapping  For all files listed show how the names encrypt.
--delete-after  When synchronizing, delete files on destination after transferring (default)
--delete-before  When synchronizing, delete files on destination before transferring
--delete-during  When synchronizing, delete files during transfer
--delete-excluded  Delete files on dest excluded from sync
--disable string  Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export  Use alternate export URLs for google documents export.
--drive-auth-owner-only  Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string  Google Application Client Id
--drive-client-secret string  Google Application Client Secret
--drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string  Deprecated: see export_formats
--drive-impersonate string  Impersonate this user when using a service account.
--drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever  Keep new head revision of each file forever.
--drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string  ID of the root folder
--drive-scope string  Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string  Service Account Credentials JSON blob
--drive-service-account-file string  Service Account Credentials JSON file path
--drive-shared-with-me  Only show files that are shared with me.
--drive-skip-gdocs  Skip google documents in all listings.
--drive-team-drive string  ID of the Team Drive
--drive-trashed-only  Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date  Use file created date instead of modified date.
--drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix  If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string  Dropbox App Client Id
--dropbox-client-secret string  Dropbox App Client Secret
-n, --dry-run  Do a trial run with no permanent changes
--dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
--dump-headers  Dump HTTP headers - may contain sensitive info
--exclude stringArray  Exclude files matching pattern
--exclude-from stringArray  Read exclude patterns from file
--exclude-if-present string  Exclude directories if filename is present
--fast-list  Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray  Read list of source-file names from file
-f, --filter stringArray  Add a file-filtering rule
--filter-from stringArray  Read filtering patterns from a file
--ftp-host string  FTP host to connect to
--ftp-pass string  FTP password
--ftp-port string  FTP port, leave blank to use default (21)
--ftp-user string  FTP username, leave blank for current username, ncw
--gcs-bucket-acl string  Access Control List for new buckets.
--gcs-client-id string  Google Application Client Id
--gcs-client-secret string  Google Application Client Secret
--gcs-location string  Location for the newly created buckets.
--gcs-object-acl string  Access Control List for new objects.
--gcs-project-number string  Project number.
--gcs-service-account-file string  Service Account Credentials JSON file path
--gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
--http-url string  URL of http host to connect to
--hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string  Hubic Client Id
--hubic-client-secret string  Hubic Client Secret
--ignore-checksum  Skip post copy check of checksums.
--ignore-errors  delete even if there are I/O errors
--ignore-existing  Skip all files that exist on destination
--ignore-size  Ignore size when skipping use mod-time or checksum.
-I, --ignore-times  Don't skip files that match size and time - transfer all files
--immutable  Do not modify files. Fail if existing files have been modified.
--include stringArray  Include files matching pattern
--include-from stringArray  Read include patterns from file
--jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string  The mountpoint to use.
--jottacloud-pass string  Password.
--jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string  User Name
--local-no-check-updated  Don't check to see if the files change during upload
--local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string  Disable UNC (long path names) conversion on Windows
--log-file string  Log everything to this file
--log-format string  Comma separated list of log format options (default "date,time")
--log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int  Number of low level retries to do. (default 10)
--max-age duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int  When synchronizing, limit the number of deletes (default -1)
--max-depth int  If set limits the recursion depth to this. (default -1)
--max-size int  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int  Maximum size of data to transfer. (default off)
--mega-debug  Output more debug from Mega.
--mega-hard-delete  Delete files permanently rather than putting them into the trash.
--mega-pass string  Password.
--mega-user string  User name
--memprofile string  Write memory profile to file
--min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018
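To make the global flags above concrete, here is a minimal sketch of how a few of them combine into a single v1.44 invocation. The sketch only prints the command rather than executing it, so it runs even where rclone is not installed; the source path and the `remote:backup/docs` destination are placeholder assumptions, not taken from this commit.

```shell
# Sketch only: builds and prints an rclone command using several of the
# global flags documented above. "remote:backup/docs" is a placeholder.
set -eu

build_sync_cmd() {
    # Print each word of the invocation separated by a space, then a newline.
    printf '%s ' rclone sync /home/user/docs remote:backup/docs \
        --transfers 4 \
        --checkers 8 \
        --max-age 30d \
        --log-level INFO \
        --log-format date,time \
        --progress
    printf '\n'
}

build_sync_cmd
```

Note that `--log-format` is one of the flags new in this release; combining it with `--log-level INFO` and `--progress` gives timestamped logs alongside a live transfer display.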

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@ -53,261 +53,279 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
  -n, --dry-run   Do a trial run with no permanent changes
      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP headers - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
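These global and backend flags combine on any rclone command. As a minimal sketch — the local path and the remote name `gdrive:` are hypothetical and assume a remote has already been set up with `rclone config`:

```shell
# Preview the sync first: --dry-run makes no permanent changes.
rclone sync /home/user/photos gdrive:photos --dry-run

# Then run it for real with live progress, more parallel transfers
# and a timestamped log file.
rclone sync /home/user/photos gdrive:photos \
  --progress \
  --transfers 8 \
  --log-level INFO \
  --log-file sync.log \
  --log-format "date,time"
```

Backend options can also be supplied as environment variables of the form `RCLONE_<OPTION>` (for example `RCLONE_B2_UPLOAD_CUTOFF` for `--b2-upload-cutoff`), which is convenient for scripted use.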
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

---
date: 2018-10-15T11:00:47+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
---

```
rclone cryptdecode encryptedremote: encryptedfilename [flags]
```
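For instance, to recover the plaintext name of a file stored on a crypt remote — the remote name `secret:` and the encrypted filename below are made-up placeholders:

```shell
# Hypothetical example: ask the crypt remote to decode an encrypted file name.
rclone cryptdecode secret: 0f3kcmdv3tkbqwtb4nrppdtqmc
```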
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
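The global flags above combine freely with any rclone command. As an illustrative sketch only (the remote name `remote:backup` and the source path are placeholders, not from this page), a sync that compares by checksum, raises transfer parallelism, rate-limits API calls and shows progress might look like:

```shell
# Hypothetical invocation using several of the flags listed above.
# "remote:backup" is a placeholder remote; substitute one from your own config.
rclone sync /home/user/docs remote:backup \
    --checksum \
    --transfers 8 \
    --tpslimit 10 \
    --stats 30s \
    --log-level INFO \
    -P
```

All of these flags appear in the list above; the values chosen (8 transfers, 10 transactions/second) are arbitrary examples, not recommendations.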
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@ -30,261 +30,279 @@ rclone dbhashsum remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Purge the cache DB before --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string SSH host to connect to
      --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
      --sftp-pass string SSH password, leave blank to use ssh-agent.
      --sftp-path-override string Override path used by SSH connection.
      --sftp-port string SSH port, leave blank to use default (22)
      --sftp-set-modtime Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string SSH username, leave blank for current username, ncw
      --size-only Skip based on size only, not mod-time or checksum
      --skip-links Don't warn about skipped symlinks.
      --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
      --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line Make the stats fit on one line.
      --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string Suffix for use with --backup-dir.
      --swift-auth string Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string API key or password (OS_PASSWORD).
      --swift-region string Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string The storage policy to use when creating a new container
      --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
      --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
      --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
      --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
      --swift-user string User name to log in (OS_USERNAME).
      --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
      --syslog Use Syslog for logging
      --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
      --timeout duration IO idle timeout (default 5m0s)
      --tpslimit float Limit HTTP transactions per second to this.
      --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
      --track-renames When synchronizing, track file renames and do a server side move if possible
      --transfers int Number of file transfers to run in parallel. (default 4)
      --union-remotes string List of space separated remotes.
  -u, --update Skip files that are newer on the destination.
      --use-server-modtime Use server modified time instead of object metadata
      --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
  -v, --verbose count Print lots more stuff (repeat for more)
      --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
      --webdav-pass string Password.
      --webdav-url string URL of http host to connect to
      --webdav-user string User name
      --webdav-vendor string Name of the Webdav site/service/software you are using
      --yandex-client-id string Yandex Client Id
      --yandex-client-secret string Yandex Client Secret
```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018


@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@ -106,261 +106,279 @@ rclone dedupe [mode] remote:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string Auth server URL.
      --acd-client-id string Amazon Application Client ID.
      --acd-client-secret string Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string Token server url.
      --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string Remote or path to alias.
      --ask-password Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string Access tier of blob: hot, cool or archive.
      --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string Endpoint for the service
      --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int Size of blob list. (default 5000)
      --azureblob-sas-url string SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string Account ID or Application Key ID
      --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string Endpoint for the service.
      --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
      --b2-key string Application Key
      --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
      --b2-versions Include old versions in directory listings.
      --backup-dir string Make backups into hierarchy based in DIR.
      --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string Box App Client Id.
      --box-client-secret string Box App Client Secret
      --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string The password of the Plex user
      --cache-plex-url string The URL of the Plex server
      --cache-plex-username string The username of the Plex user
      --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
      --cache-remote string Remote to cache.
      --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int How many workers should run in parallel to download chunks. (default 4)
      --cache-writes Cache file data on writes through the FS
      --checkers int Number of checkers to run in parallel. (default 8)
  -c, --checksum Skip based on checksum & size, not mod-time & size
      --config string Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration Connect timeout (default 1m0s)
  -L, --copy-links Follow symlinks and copy the pointed to item.
      --cpuprofile string Write cpu profile to file
      --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
      --crypt-password string Password or pass phrase for encryption.
      --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string Remote to encrypt/decrypt.
      --crypt-show-mapping For all files listed show how the names encrypt.
      --delete-after When synchronizing, delete files on destination after transferring (default)
      --delete-before When synchronizing, delete files on destination before transferring
      --delete-during When synchronizing, delete files during transfer
      --delete-excluded Delete files on dest excluded from sync
      --disable string Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export Use alternate export URLs for google documents export.
      --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string Google Application Client Id
      --drive-client-secret string Google Application Client Secret
      --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string Deprecated: see export_formats
      --drive-impersonate string Impersonate this user when using a service account.
      --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever Keep new head revision of each file forever.
      --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string ID of the root folder
      --drive-scope string Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string Service Account Credentials JSON blob
      --drive-service-account-file string Service Account Credentials JSON file path
      --drive-shared-with-me Only show files that are shared with me.
      --drive-skip-gdocs Skip google documents in all listings.
      --drive-team-drive string ID of the Team Drive
      --drive-trashed-only Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date Use file created date instead of modified date.
      --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string Dropbox App Client Id
      --dropbox-client-secret string Dropbox App Client Secret
  -n, --dry-run Do a trial run with no permanent changes
      --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers Dump HTTP headers - may contain sensitive info
      --exclude stringArray Exclude files matching pattern
      --exclude-from stringArray Read exclude patterns from file
      --exclude-if-present string Exclude directories if filename is present
      --fast-list Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray Read list of source-file names from file
  -f, --filter stringArray Add a file-filtering rule
      --filter-from stringArray Read filtering patterns from a file
      --ftp-host string FTP host to connect to
      --ftp-pass string FTP password
      --ftp-port string FTP port, leave blank to use default (21)
      --ftp-user string FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string Access Control List for new buckets.
      --gcs-client-id string Google Application Client Id
      --gcs-client-secret string Google Application Client Secret
      --gcs-location string Location for the newly created buckets.
      --gcs-object-acl string Access Control List for new objects.
      --gcs-project-number string Project number.
      --gcs-service-account-file string Service Account Credentials JSON file path
      --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
      --http-url string URL of http host to connect to
      --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string Hubic Client Id
      --hubic-client-secret string Hubic Client Secret
      --ignore-checksum Skip post copy check of checksums.
      --ignore-errors delete even if there are I/O errors
      --ignore-existing Skip all files that exist on destination
      --ignore-size Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times Don't skip files that match size and time - transfer all files
      --immutable Do not modify files. Fail if existing files have been modified.
      --include stringArray Include files matching pattern
      --include-from stringArray Read include patterns from file
      --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string The mountpoint to use.
      --jottacloud-pass string Password.
      --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string User Name
      --local-no-check-updated Don't check to see if the files change during upload
      --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string Disable UNC (long path names) conversion on Windows
      --log-file string Log everything to this file
      --log-format string Comma separated list of log format options (default "date,time")
      --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int Number of low level retries to do. (default 10)
      --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int When synchronizing, limit the number of deletes (default -1)
      --max-depth int If set limits the recursion depth to this. (default -1)
      --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int Maximum size of data to transfer. (default off)
      --mega-debug Output more debug from Mega.
      --mega-hard-delete Delete files permanently rather than putting them into the trash.
      --mega-pass string Password.
      --mega-user string User name
      --memprofile string Write memory profile to file
      --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration Max time diff to be considered the same (default 1ns)
      --no-check-certificate Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding Don't set Accept-Encoding: gzip.
      --no-traverse Obsolete - does nothing.
      --no-update-modtime Don't update destination mod-time if files identical.
  -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string Microsoft App Client Id
      --onedrive-client-secret string Microsoft App Client Secret
      --onedrive-drive-id string The ID of the drive to use
      --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
      --opendrive-password string Password.
      --opendrive-username string Username
      --pcloud-client-id string Pcloud App Client Id
      --pcloud-client-secret string Pcloud App Client Secret
  -P, --progress Show progress during transfer.
      --qingstor-access-key-id string QingStor Access Key ID
      --qingstor-connection-retries int Number of connection retries. (default 3)
      --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string QingStor Secret Access Key (password)
      --qingstor-zone string Zone to connect to.
  -q, --quiet Print as little stuff as possible
      --rc Enable the remote control server.
      --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string Client certificate authority to verify clients with
      --rc-htpasswd string htpasswd file - if not provided no authentication is done
      --rc-key string SSL PEM Private key
      --rc-max-header-bytes int Maximum size of request header (default 4096)
      --rc-pass string Password for authentication.
      --rc-realm string realm for authentication (default "rclone")
      --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
      --rc-user string User name for authentication.
      --retries int Retry operations this many times if they fail (default 3)
      --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string AWS Access Key ID.
      --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
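As a usage sketch of how several of the global flags above combine on one command line (the paths and the `remote:backup` name are placeholders, not part of this document):

```shell
# Hypothetical sync combining flags from the list above; "remote:backup"
# stands in for any remote configured via `rclone config`.
# --transfers raises the number of parallel file transfers, --tpslimit
# caps HTTP transactions per second, the retry flags tune robustness,
# and the stats flags give compact periodic progress output.
rclone sync /home/user/docs remote:backup \
    --transfers 8 \
    --tpslimit 10 --tpslimit-burst 1 \
    --retries 5 --retries-sleep 10s \
    --stats 30s --stats-one-line
```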
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@ -42,261 +42,279 @@ rclone delete remote:path [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
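Every flag in the list above can also be supplied through an environment variable: upper-case the flag name, replace `-` with `_`, and prefix `RCLONE_` (the backend docs show this pattern, e.g. `--b2-upload-cutoff` maps to `RCLONE_B2_UPLOAD_CUTOFF`). A minimal sketch of the convention, using `--transfers` and `--log-format` from the list; the chosen values are illustrative, not recommendations:

```shell
# Environment-variable form of two flags from the list above.
# The RCLONE_ naming follows the Env Var pattern documented per flag
# (e.g. RCLONE_B2_UPLOAD_CUTOFF for --b2-upload-cutoff).
export RCLONE_TRANSFERS=8                          # same as --transfers 8
export RCLONE_LOG_FORMAT="date,time,microseconds"  # same as --log-format ...

# A subsequent rclone invocation would pick these up, e.g.:
#   rclone copy remote:src remote:dst
echo "transfers=$RCLONE_TRANSFERS log_format=$RCLONE_LOG_FORMAT"
```

This is convenient for CI jobs or systemd units where editing the command line for each tuning change is awkward.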
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
@ -29,261 +29,279 @@ rclone deletefile remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If objects are greater than this, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
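Many of the flags listed above (for example `--min-size`, `--max-size` and `--b2-upload-cutoff`) take sizes in rclone's `b|k|M|G` suffix notation, using binary multiples (1k = 1024 bytes). The sketch below illustrates that convention; it is a hypothetical reimplementation for clarity, not rclone's actual `SizeSuffix` parser.

```python
# Illustrative decoder for rclone-style size suffixes (b|k|M|G).
# Assumption: binary multiples, and a bare number means kBytes, as the
# --min-size/--max-size descriptions above state ("in k or suffix b|k|M|G").

def parse_size_suffix(value: str) -> int:
    """Convert a string like '5M' or '100k' to a number of bytes."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip()
    if value and value[-1] in multipliers:
        number, unit = value[:-1], value[-1]
    else:
        number, unit = value, "k"  # bare numbers default to kBytes
    return int(float(number) * multipliers[unit])

print(parse_size_suffix("5M"))    # 5242880
print(parse_size_suffix("100k"))  # 102400
```

Under this reading, `--max-size 100k` and `--max-size 102400b` describe the same limit.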
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -24,263 +24,281 @@ Run with --help to list the supported shells.
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.

###### Auto generated by spf13/cobra on 15-Oct-2018


@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@ -40,261 +40,279 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transfering (default)
      --delete-before                                When synchronizing, delete files on destination before transfering
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.,
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.,
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP bodies - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-pass string                       Password.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                       User Name
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
      --log-format string                            Comma separated list of log format options (default "date,time")
      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                        Number of low level retries to do. (default 10)
      --max-age duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
      --max-depth int                                If set limits the recursion depth to this. (default -1)
      --max-size int                                 Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                             Maximum size of data to transfer. (default off)
      --mega-debug                                   Output more debug from Mega.
      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
      --mega-pass string                             Password.
      --mega-user string                             User name
      --memprofile string                            Write memory profile to file
      --min-age duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int                                 Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                       Max time diff to be considered the same (default 1ns)
      --no-check-certificate                         Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                             Don't set Accept-Encoding: gzip.
      --no-traverse                                  Obsolete - does nothing.
      --no-update-modtime                            Don't update destination mod-time if files identical.
  -x, --one-file-system                              Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix               Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                    Microsoft App Client Id
      --onedrive-client-secret string                Microsoft App Client Secret
      --onedrive-drive-id string                     The ID of the drive to use
      --onedrive-drive-type string                   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files                Set to make OneNote files show up in directory listings.
      --opendrive-password string                    Password.
      --opendrive-username string                    Username
      --pcloud-client-id string                      Pcloud App Client Id
      --pcloud-client-secret string                  Pcloud App Client Secret
  -P, --progress                                     Show progress during transfer.
      --qingstor-access-key-id string                QingStor Access Key ID
      --qingstor-connection-retries int              Number of connnection retries. (default 3)
      --qingstor-endpoint string                     Enter a endpoint URL to connection QingStor API.
      --qingstor-env-auth                            Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string            QingStor Secret Access Key (password)
      --qingstor-zone string                         Zone to connect to.
  -q, --quiet                                        Print as little stuff as possible
      --rc                                           Enable the remote control server.
      --rc-addr string                               IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string                               SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                          Client certificate authority to verify clients with
      --rc-htpasswd string                           htpasswd file - if not provided no authentication is done
      --rc-key string                                SSL PEM Private key
      --rc-max-header-bytes int                      Maximum size of request header (default 4096)
      --rc-pass string                               Password for authentication.
      --rc-realm string                              realm for authentication (default "rclone")
      --rc-server-read-timeout duration              Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration             Timeout for server writing data (default 1h0m0s)
      --rc-user string                               User name for authentication.
      --retries int                                  Retry operations this many times if they fail (default 3)
      --retries-sleep duration                       Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string                      AWS Access Key ID.
      --s3-acl string                                Canned ACL used when creating buckets and/or storing objects in S3.
      --s3-chunk-size SizeSuffix                     Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                          Don't store MD5 checksum with object metadata
      --s3-endpoint string                           Endpoint for S3 API.
      --s3-env-auth                                  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                          If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string                Location constraint - must be set to match the Region.
      --s3-provider string                           Choose your S3 provider.
      --s3-region string                             Region to connect to.
      --s3-secret-access-key string                  AWS Secret Access Key (password)
      --s3-server-side-encryption string             The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                      An AWS session token
      --s3-sse-kms-key-id string                     If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                      The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 2)
      --s3-v2-auth                                   If true use v2 authentication.
      --sftp-ask-password                            Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                       Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                             SSH host to connect to
      --sftp-key-file string                         Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
      --sftp-pass string                             SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                    Override path used by SSH connection.
      --sftp-port string                             SSH port, leave blank to use default (22)
      --sftp-set-modtime                             Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher                     Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                             SSH username, leave blank for current username, ncw
      --size-only                                    Skip based on size only, not mod-time or checksum
      --skip-links                                   Don't warn about skipped symlinks.
      --stats duration                               Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 40)
      --stats-log-level string                       Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 15-Oct-2018
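To give a sense of how these inherited global flags combine in practice, here is an illustrative invocation sketch. It assumes rclone is installed and that a remote named `remote:` has already been configured; the source path and remote name are placeholders, not part of the generated documentation above.

```shell
# Hypothetical sync using flags from the list above: 8 parallel transfers,
# a 1 MByte/s bandwidth cap, one-line stats every 30s, and the new
# --log-format option from v1.44. "remote:" is a placeholder remote.
rclone sync /path/to/src remote:backup \
  --transfers 8 \
  --bwlimit 1M \
  --stats 30s --stats-one-line \
  --log-format "date,time"
```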
View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@ -40,261 +40,279 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell. * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018
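Several of the flags listed above take a `SizeSuffix` value (e.g. `--b2-upload-cutoff 200M`, `--swift-chunk-size 5G`), which rclone interprets using binary multiples (1k = 1024 bytes). As a rough illustration of the convention — not rclone's actual Go parser (`fs.SizeSuffix`), which also handles fractional values and `off` — a minimal sketch:

```python
def parse_size_suffix(s: str) -> int:
    """Parse an rclone-style SizeSuffix like '5M' or '100k' into bytes.

    Illustrative sketch only: covers the common single-letter suffixes
    shown in the flag listing; a bare number is treated as bytes.
    """
    units = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if s and s[-1] in units:
        # strip the suffix, scale by the binary multiple
        return int(float(s[:-1]) * units[s[-1]])
    return int(s)
```

For example, `parse_size_suffix("200M")` gives 209715200 bytes, matching the `--b2-upload-cutoff` default above.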
---
date: 2018-10-15T11:00:47+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
```
rclone gendocs output_directory [flags]
```
### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
  -n, --dry-run   Do a trial run with no permanent changes
      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP bodies - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
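As a hedged sketch of how the global flags above compose on one command line: the remote name `remote:backup` and the source directory below are placeholders, not taken from this page, and the script guards against rclone being absent so it degrades gracefully.

```shell
#!/bin/sh
# Hypothetical invocation combining several of the global flags listed above.
# "remote:backup" is a placeholder -- substitute a remote configured via
# `rclone config`. The sketch never fails hard: it reports and continues.
if command -v rclone >/dev/null 2>&1; then
  rclone sync "$HOME/docs" remote:backup \
    --transfers 8 \
    --checkers 16 \
    --max-age 7d \
    --stats 30s \
    --log-level INFO --log-file /tmp/rclone-sync.log \
    || echo "sync failed (expected if remote:backup is not configured)"
else
  echo "rclone not installed"
fi
```

Each global flag also has an environment-variable form (`RCLONE_` plus the upper-cased flag name, as the `Env Var:` lines in the backend docs show), which can be handier than long command lines in cron jobs.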

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@ -42,261 +42,279 @@ rclone hashsum <hash> remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string   Auth server URL.
--acd-client-id string   Amazon Application Client ID.
--acd-client-secret string   Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string   Token server url.
--acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string   Remote or path to alias.
--ask-password   Allow prompt for password for encrypted configuration. (default true)
--auto-confirm   If enabled, do not request console confirmation.
--azureblob-access-tier string   Access tier of blob: hot, cool or archive.
--azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string   Endpoint for the service
--azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int   Size of blob list. (default 5000)
--azureblob-sas-url string   SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string   Account ID or Application Key ID
--b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string   Endpoint for the service.
--b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
--b2-key string   Application Key
--b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
--b2-versions   Include old versions in directory listings.
--backup-dir string   Make backups into hierarchy based in DIR.
--bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string   Box App Client Id.
--box-client-secret string   Box App Client Secret
--box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge   Clear all the cached data for this remote on start.
--cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string   The password of the Plex user
--cache-plex-url string   The URL of the Plex server
--cache-plex-username string   The username of the Plex user
--cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
--cache-remote string   Remote to cache.
--cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int   How many workers should run in parallel to download chunks. (default 4)
--cache-writes   Cache file data on writes through the FS
--checkers int   Number of checkers to run in parallel. (default 8)
-c, --checksum   Skip based on checksum & size, not mod-time & size
--config string   Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration   Connect timeout (default 1m0s)
-L, --copy-links   Follow symlinks and copy the pointed to item.
--cpuprofile string   Write cpu profile to file
--crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
--crypt-password string   Password or pass phrase for encryption.
--crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
--crypt-remote string   Remote to encrypt/decrypt.
--crypt-show-mapping   For all files listed show how the names encrypt.
--delete-after   When synchronizing, delete files on destination after transfering (default)
--delete-before   When synchronizing, delete files on destination before transfering
--delete-during   When synchronizing, delete files during transfer
--delete-excluded   Delete files on dest excluded from sync
--disable string   Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export   Use alternate export URLs for google documents export.
--drive-auth-owner-only   Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix   Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string   Google Application Client Id
--drive-client-secret string   Google Application Client Secret
--drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string   Deprecated: see export_formats
--drive-impersonate string   Impersonate this user when using a service account.
--drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever   Keep new head revision of each file forever.
--drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string   ID of the root folder
--drive-scope string   Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string   Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@ -35,261 +35,279 @@ rclone link remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                    Auth server URL.
      --acd-client-id string                   Amazon Application Client ID.
      --acd-client-secret string               Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                   Token server url.
      --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                    Remote or path to alias.
      --ask-password                           Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                           If enabled, do not request console confirmation.
      --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
      --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string              Endpoint for the service
      --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int               Size of blob list. (default 5000)
      --azureblob-sas-url string               SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                      Account ID or Application Key ID
      --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                     Endpoint for the service.
      --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                          Application Key
      --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                            Include old versions in directory listings.
      --backup-dir string                      Make backups into hierarchy based in DIR.
      --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                   Box App Client Id.
      --box-client-secret string               Box App Client Secret
      --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                        In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix            The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix      The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                         Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration            How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                       Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
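
As an illustration of how several of the global flags above combine on one command line, here is a hedged sketch; the paths, the remote name `remote:backup`, and the chosen values are placeholders, not taken from this page. The command string is printed rather than executed, since no configured remote is assumed:

```shell
# Hypothetical flag combination (all names and values are placeholders):
# 8 parallel transfers, HTTP calls throttled to 10/s with a burst of 1,
# and rename tracking enabled during sync. Printed, not executed.
cmd="rclone sync /local/data remote:backup --transfers 8 --tpslimit 10 --tpslimit-burst 1 --track-renames"
echo "$cmd"
```

Note that `--track-renames` only takes effect on remotes where a server side move is possible, as its description above states.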
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -30,261 +30,279 @@ rclone listremotes [flags]

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If objects are greater than this, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
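The global flags listed above can be combined on any rclone command. As a hypothetical illustration (the remote `backup:`, the local path and the flag values are placeholders, not part of this commit), a copy with a bandwidth cap and compact stats could be assembled like this; the command is only printed here, since the remote is fictional:

```shell
# Hypothetical example: combine a few of the global flags from the listing.
# "backup:" and /data are placeholders, so print the command instead of
# executing it.
CMD="rclone copy /data backup:data --transfers 8 --bwlimit 1M --stats-one-line"
echo "$CMD"
```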
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@ -59,261 +59,279 @@ rclone ls remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
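As an illustrative sketch (not part of the generated page), several of the global flags listed above can be combined on one command line. The remote names `source:` and `dest:` are hypothetical; the snippet only assembles and prints the command rather than invoking rclone, and `--dry-run` would make rclone report what it would do without changing anything:

```shell
# Hypothetical remotes; the command is built as a string and printed, not run.
cmd="rclone copy source:docs dest:backup \
  --min-age 7d --max-size 100M \
  --transfers 8 --bwlimit 10M \
  --stats 30s --stats-one-line --dry-run"
echo "$cmd"
```

The age/size filters select which files transfer, while `--transfers` and `--bwlimit` bound the parallelism and bandwidth of the run.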
### SEE ALSO

* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
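For the `--rc-*` flags above, a hedged sketch: in this release the remote control server is enabled by adding `--rc` to a long-running rclone command. The remotes and credentials below are placeholders, and the snippet only builds and prints the command line:

```shell
# Placeholder remotes and credentials; prints the command instead of running it.
cmd="rclone sync source:docs dest:backup --rc \
  --rc-addr localhost:5572 --rc-user admin --rc-pass secret"
echo "$cmd"
```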
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@ -70,261 +70,279 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Objects are larger, use the drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
  -n, --dry-run  Do a trial run with no permanent changes
      --dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers  Dump HTTP headers - may contain sensitive info
      --exclude stringArray  Exclude files matching pattern
      --exclude-from stringArray  Read exclude patterns from file
      --exclude-if-present string  Exclude directories if filename is present
      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray  Read list of source-file names from file
  -f, --filter stringArray  Add a file-filtering rule
      --filter-from stringArray  Read filtering patterns from a file
      --ftp-host string  FTP host to connect to
      --ftp-pass string  FTP password
      --ftp-port string  FTP port, leave blank to use default (21)
      --ftp-user string  FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string  Access Control List for new buckets.
      --gcs-client-id string  Google Application Client Id
      --gcs-client-secret string  Google Application Client Secret
      --gcs-location string  Location for the newly created buckets.
      --gcs-object-acl string  Access Control List for new objects.
      --gcs-project-number string  Project number.
      --gcs-service-account-file string  Service Account Credentials JSON file path
      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string  URL of http host to connect to
      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string  Hubic Client Id
      --hubic-client-secret string  Hubic Client Secret
      --ignore-checksum  Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
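As an illustration (not part of the generated reference), several of the global flags listed above can be combined in a single invocation. `remote:backup` is a placeholder for a remote previously set up with `rclone config`:

```
# Hypothetical sync combining flags documented above:
#   --fast-list       recursive listing: more memory, fewer transactions
#   --transfers 8     run 8 file transfers in parallel (default 4)
#   --max-age 30d     only consider files modified within the last 30 days
#   --dry-run         trial run: report what would change without changing anything
rclone sync /home/user/docs remote:backup \
    --fast-list --transfers 8 --max-age 30d --log-level INFO --dry-run
```

Dropping `--dry-run` performs the sync for real; the remaining flags only affect listing strategy, parallelism, filtering and log verbosity.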
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@ -148,261 +148,279 @@ rclone lsf remote:path [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
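Every backend flag in the list above also has an environment-variable form; this commit's backend docs show, for example, `--b2-upload-cutoff` alongside `Env Var: RCLONE_B2_UPLOAD_CUTOFF`. A small sketch of that naming convention, assuming the simple prefix/uppercase mapping (the helper function is illustrative, not part of rclone):

```python
def flag_to_env(flag: str) -> str:
    """Map an rclone flag name to its RCLONE_* environment variable.

    Assumes the documented convention: strip leading dashes,
    replace '-' with '_', uppercase, and prefix with RCLONE_.
    """
    return "RCLONE_" + flag.lstrip("-").replace("-", "_").upper()

# "--b2-upload-cutoff" maps to "RCLONE_B2_UPLOAD_CUTOFF"
print(flag_to_env("--b2-upload-cutoff"))
```

Setting the variable (e.g. `export RCLONE_B2_UPLOAD_CUTOFF=100M`) has the same effect as passing the flag on the command line.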
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -88,261 +88,279 @@ rclone lsjson remote:path [flags]
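For context on the `rclone lsjson` page this hunk belongs to: lsjson prints a JSON array of entry objects with fields such as `Path`, `Name`, `Size`, `MimeType`, `ModTime` and `IsDir`. A minimal sketch of consuming that output in Python, using illustrative sample data in that shape rather than a live rclone call:

```python
import json

# Sample data shaped like `rclone lsjson remote:path` output;
# the entries themselves are illustrative.
sample = """
[
  {"Path": "file.txt", "Name": "file.txt", "Size": 6,
   "MimeType": "text/plain", "ModTime": "2018-10-15T11:00:47+01:00", "IsDir": false},
  {"Path": "subdir", "Name": "subdir", "Size": -1,
   "MimeType": "inode/directory", "ModTime": "2018-10-15T11:00:47+01:00", "IsDir": true}
]
"""

entries = json.loads(sample)
# Separate files from directories and total the file sizes.
files = [e for e in entries if not e["IsDir"]]
total_size = sum(e["Size"] for e in files)
print(len(files), total_size)
```

In practice the JSON would come from `subprocess` running `rclone lsjson`, but parsing is the same either way.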
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Purge the cache DB before --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -59,261 +59,279 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
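As an illustrative sketch of how the global flags above combine on the command line (the remote name `remote:` and all flag values here are assumptions for the example, not taken from this documentation):

```shell
# Sync with tuned parallelism, a transaction-rate cap, and compact stats.
# All flags used here appear in the listing above; values are arbitrary.
rclone sync /srv/data remote:archive \
    --transfers 8 --checkers 16 \
    --tpslimit 10 --tpslimit-burst 5 \
    --stats 30s --stats-one-line

# Restrict the same sync to smaller, older files using the filter flags.
rclone sync /srv/data remote:archive \
    --max-size 100M --min-age 7d
```

Backend flags such as `--swift-auth` or `--s3-region` are usually set once in the config file rather than passed per invocation; the command-line forms above override the stored values for a single run.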
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@ -28,261 +28,279 @@ rclone md5sum remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transfering (default)
      --delete-before                                When synchronizing, delete files on destination before transfering
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP headers - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                Delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-pass string                       Password.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                       User Name
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
      --log-format string                            Comma separated list of log format options (default "date,time")
      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                        Number of low level retries to do. (default 10)
      --max-age duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
      --max-depth int                                If set limits the recursion depth to this. (default -1)
      --max-size int                                 Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                             Maximum size of data to transfer. (default off)
      --mega-debug                                   Output more debug from Mega.
      --mega-hard-delete                             Delete files permanently rather than putting them into the trash.
      --mega-pass string                             Password.
      --mega-user string                             User name
      --memprofile string                            Write memory profile to file
      --min-age duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int                                 Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                       Max time diff to be considered the same (default 1ns)
      --no-check-certificate                         Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                             Don't set Accept-Encoding: gzip.
      --no-traverse                                  Obsolete - does nothing.
      --no-update-modtime                            Don't update destination mod-time if files identical.
  -x, --one-file-system                              Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix               Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -25,261 +25,279 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                           In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                          Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                The password of the Plex user
      --cache-plex-url string                     The URL of the Plex server
      --cache-plex-username string                The username of the Plex user
      --cache-read-retries int                    How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                       Remote to cache.
      --cache-rps int                             Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string              Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration              How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                         How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                              Cache file data on writes through the FS
      --checkers int                              Number of checkers to run in parallel. (default 8)
  -c, --checksum                                  Skip based on checksum & size, not mod-time & size
      --config string                             Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                       Connect timeout (default 1m0s)
  -L, --copy-links                                Follow symlinks and copy the pointed to item.
      --cpuprofile string                         Write cpu profile to file
      --crypt-directory-name-encryption           Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string          How to encrypt the filenames. (default "standard")
      --crypt-password string                     Password or pass phrase for encryption.
      --crypt-password2 string                    Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                       Remote to encrypt/decrypt.
      --crypt-show-mapping                        For all files listed show how the names encrypt.
      --delete-after                              When synchronizing, delete files on destination after transfering (default)
      --delete-before                             When synchronizing, delete files on destination before transfering
      --delete-during                             When synchronizing, delete files during transfer
      --delete-excluded                           Delete files on dest excluded from sync
      --disable string                            Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change            Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                    Use alternate export URLs for google documents export.
      --drive-auth-owner-only                     Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                    Google Application Client Id
      --drive-client-secret string                Google Application Client Secret
      --drive-export-formats string               Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                      Deprecated: see export_formats
      --drive-impersonate string                  Impersonate this user when using a service account.
      --drive-import-formats string               Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever               Keep new head revision of each file forever.
      --drive-list-chunk int                      Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string               ID of the root folder
      --drive-scope string                        Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string         Service Account Credentials JSON file path
      --drive-shared-with-me                      Only show files that are shared with me.
      --drive-skip-gdocs                          Skip google documents in all listings.
      --drive-team-drive string                   ID of the Team Drive
      --drive-trashed-only                        Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                    Use file created date instead of modified date.
      --drive-use-trash                           Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix     If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix             Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                  Dropbox App Client Id
      --dropbox-client-secret string              Dropbox App Client Secret
  -n, --dry-run                                   Do a trial run with no permanent changes
      --dump string                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                               Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                              Dump HTTP headers - may contain sensitive info
      --exclude stringArray                       Exclude files matching pattern
      --exclude-from stringArray                  Read exclude patterns from file
      --exclude-if-present string                 Exclude directories if filename is present
      --fast-list                                 Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                    Read list of source-file names from file
  -f, --filter stringArray                        Add a file-filtering rule
      --filter-from stringArray                   Read filtering patterns from a file
      --ftp-host string                           FTP host to connect to
      --ftp-pass string                           FTP password
      --ftp-port string                           FTP port, leave blank to use default (21)
      --ftp-user string                           FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                     Access Control List for new buckets.
      --gcs-client-id string                      Google Application Client Id
      --gcs-client-secret string                  Google Application Client Secret
      --gcs-location string                       Location for the newly created buckets.
      --gcs-object-acl string                     Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 1-Sep-2018 ###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,12 +1,12 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
---

## rclone mount

Mount the remote as file system on a mountpoint.

### Synopsis

@@ -15,8 +15,6 @@
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.

First set up your remote using `rclone config`. Check it works with `rclone ls` etc.

Start the mount like this

@@ -91,8 +89,8 @@
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching)
for solutions to make mount more reliable.
### Attribute caching

@@ -201,8 +199,6 @@
The maximum memory used by rclone for buffering can be up to

### File Caching

These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.

@@ -329,261 +325,279 @@
rclone mount remote:path /path/to/mountpoint [flags]
### Options inherited from parent commands

```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@ -45,261 +45,279 @@ rclone move source:path dest:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
--b2-account string Account ID or Application Key ID --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-account string Account ID or Application Key ID
--b2-endpoint string Endpoint for the service. --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-endpoint string Endpoint for the service.
--b2-key string Application Key --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-key string Application Key
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Purge the cache DB before --cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string  An AWS session token
--s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
--s3-storage-class string  The storage class to use when storing new objects in S3.
--s3-upload-concurrency int  Concurrency for multipart uploads. (default 2)
--s3-v2-auth  If true use v2 authentication.
--sftp-ask-password  Allow asking for SFTP password when needed.
--sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string  SSH host to connect to
--sftp-key-file string  Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string  SSH password, leave blank to use ssh-agent.
--sftp-path-override string  Override path used by SSH connection.
--sftp-port string  SSH port, leave blank to use default (22)
--sftp-set-modtime  Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string  SSH username, leave blank for current username, ncw
--size-only  Skip based on size only, not mod-time or checksum
--skip-links  Don't warn about skipped symlinks.
--stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int  Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line  Make the stats fit on one line.
--stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string  Suffix for use with --backup-dir.
--swift-auth string  Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string  User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth  Get swift credentials from environment variables in standard OpenStack form.
--swift-key string  API key or password (OS_PASSWORD).
--swift-region string  Region name - optional (OS_REGION_NAME)
--swift-storage-policy string  The storage policy to use when creating a new container
--swift-storage-url string  Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string  Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string  Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string  Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string  User name to log in (OS_USERNAME).
--swift-user-id string  User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog  Use Syslog for logging
--syslog-facility string  Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration  IO idle timeout (default 5m0s)
--tpslimit float  Limit HTTP transactions per second to this.
--tpslimit-burst int  Max burst of transactions for --tpslimit. (default 1)
--track-renames  When synchronizing, track file renames and do a server side move if possible
--transfers int  Number of file transfers to run in parallel. (default 4)
--union-remotes string  List of space separated remotes.
-u, --update  Skip files that are newer on the destination.
--use-server-modtime  Use server modified time instead of object metadata
--user-agent string  Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count  Print lots more stuff (repeat for more)
--webdav-bearer-token string  Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string  Password.
--webdav-url string  URL of http host to connect to
--webdav-user string  User name
--webdav-vendor string  Name of the Webdav site/service/software you are using
--yandex-client-id string  Yandex Client Id
--yandex-client-secret string  Yandex Client Secret
```
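Many of the options above take `SizeSuffix` values (e.g. `--swift-chunk-size 5G`, `--streaming-upload-cutoff 100k`). As an illustrative sketch only (this is not rclone's actual parser), such suffixes are 1024-based binary multiples, so `200M` means 200 × 1024 × 1024 bytes:

```python
# Illustrative sketch only -- not rclone's actual SizeSuffix parser.
# Suffixes are 1024-based binary multiples, so "200M" = 200 * 1024 * 1024.
MULTIPLIERS = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size_suffix(value: str) -> int:
    """Return the byte count for a size string such as '100k' or '5G'."""
    suffix = value[-1]
    if suffix in MULTIPLIERS:
        return int(float(value[:-1]) * MULTIPLIERS[suffix])
    # No recognised suffix: treat the value as a plain number of bytes here.
    return int(float(value))

print(parse_size_suffix("200M"))  # 209715200
```

This is also why the `--b2-upload-cutoff` default above reads `200M`: the previous default of `190.735M` was roughly the same number of bytes expressed awkwardly in binary units.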
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
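Every flag in these listings also has a matching environment variable (for example the backend docs give `Env Var: RCLONE_B2_UPLOAD_CUTOFF` for `--b2-upload-cutoff`). A hypothetical helper sketching that naming convention, for illustration only:

```python
# Hypothetical helper illustrating the documented naming convention:
# a flag such as --b2-upload-cutoff corresponds to the environment
# variable RCLONE_B2_UPLOAD_CUTOFF.
def flag_to_env_var(flag: str) -> str:
    """Map a flag name like '--b2-upload-cutoff' to its env var name."""
    return "RCLONE_" + flag.lstrip("-").replace("-", "_").upper()

print(flag_to_env_var("--b2-upload-cutoff"))  # RCLONE_B2_UPLOAD_CUTOFF
```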
View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@ -54,261 +54,279 @@ rclone moveto source:path dest:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
-c, --checksum  Skip based on checksum & size, not mod-time & size
--config string  Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration  Connect timeout (default 1m0s)
-L, --copy-links  Follow symlinks and copy the pointed to item.
--cpuprofile string  Write cpu profile to file
--crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
--crypt-password string  Password or pass phrase for encryption.
--crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string  Remote to encrypt/decrypt.
--crypt-show-mapping  For all files listed show how the names encrypt.
--delete-after  When synchronizing, delete files on destination after transfering (default)
--delete-before  When synchronizing, delete files on destination before transfering
--delete-during  When synchronizing, delete files during transfer
--delete-excluded  Delete files on dest excluded from sync
--disable string  Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export  Use alternate export URLs for google documents export.
--drive-auth-owner-only  Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string  Google Application Client Id
--drive-client-secret string  Google Application Client Secret
--drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string  Deprecated: see export_formats
--drive-impersonate string  Impersonate this user when using a service account.
--drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever  Keep new head revision of each file forever.
--drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string  ID of the root folder
--drive-scope string  Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string  Service Account Credentials JSON blob
--drive-service-account-file string  Service Account Credentials JSON file path
--drive-shared-with-me  Only show files that are shared with me.
--drive-skip-gdocs  Skip google documents in all listings.
--drive-team-drive string  ID of the Team Drive
--drive-trashed-only  Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date  Use file created date instead of modified date.
--drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string  Dropbox App Client Id
--dropbox-client-secret string  Dropbox App Client Secret
-n, --dry-run  Do a trial run with no permanent changes
--dump string  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
--dump-headers  Dump HTTP bodies - may contain sensitive info
--exclude stringArray  Exclude files matching pattern
--exclude-from stringArray  Read exclude patterns from file
--exclude-if-present string  Exclude directories if filename is present
--fast-list  Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray  Read list of source-file names from file
-f, --filter stringArray  Add a file-filtering rule
--filter-from stringArray  Read filtering patterns from a file
--ftp-host string  FTP host to connect to
--ftp-pass string  FTP password
--ftp-port string  FTP port, leave blank to use default (21)
--ftp-user string  FTP username, leave blank for current username, ncw
--gcs-bucket-acl string  Access Control List for new buckets.
--gcs-client-id string  Google Application Client Id
--gcs-client-secret string  Google Application Client Secret
--gcs-location string  Location for the newly created buckets.
--gcs-object-acl string  Access Control List for new objects.
--gcs-project-number string  Project number.
--gcs-service-account-file string  Service Account Credentials JSON file path
--gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
--http-url string  URL of http host to connect to
--hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string  Hubic Client Id
--hubic-client-secret string  Hubic Client Secret
--ignore-checksum  Skip post copy check of checksums.
--ignore-errors  delete even if there are I/O errors
--ignore-existing  Skip all files that exist on destination
--ignore-size  Ignore size when skipping use mod-time or checksum.
-I, --ignore-times  Don't skip files that match size and time - transfer all files
--immutable  Do not modify files. Fail if existing files have been modified.
--include stringArray  Include files matching pattern
--include-from stringArray  Read include patterns from file
--jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string  The mountpoint to use.
--jottacloud-pass string  Password.
--jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string  User Name
--local-no-check-updated  Don't check to see if the files change during upload
--local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string  Disable UNC (long path names) conversion on Windows
--log-file string  Log everything to this file
--log-format string  Comma separated list of log format options (default "date,time")
--log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int  Number of low level retries to do. (default 10)
--max-age duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int  When synchronizing, limit the number of deletes (default -1)
--max-depth int  If set limits the recursion depth to this. (default -1)
--max-size int  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int  Maximum size of data to transfer. (default off)
--mega-debug  Output more debug from Mega.
--mega-hard-delete  Delete files permanently rather than putting them into the trash.
--mega-pass string  Password.
--mega-user string  User name
--memprofile string  Write memory profile to file
--min-age duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration  Max time diff to be considered the same (default 1ns)
--no-check-certificate  Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding  Don't set Accept-Encoding: gzip.
--no-traverse  Obsolete - does nothing.
--no-update-modtime  Don't update destination mod-time if files identical.
-x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string  Microsoft App Client Id
--onedrive-client-secret string  Microsoft App Client Secret
--onedrive-drive-id string  The ID of the drive to use
--onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
--opendrive-password string  Password.
--opendrive-username string  Username
--pcloud-client-id string  Pcloud App Client Id
--pcloud-client-secret string  Pcloud App Client Secret
-P, --progress  Show progress during transfer.
--qingstor-access-key-id string  QingStor Access Key ID
--qingstor-connection-retries int  Number of connnection retries. (default 3)
--qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string  QingStor Secret Access Key (password)
--qingstor-zone string  Zone to connect to.
-q, --quiet  Print as little stuff as possible
--rc  Enable the remote control server.
--rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string  Client certificate authority to verify clients with
--rc-htpasswd string  htpasswd file - if not provided no authentication is done
--rc-key string  SSL PEM Private key
--rc-max-header-bytes int  Maximum size of request header (default 4096)
--rc-pass string  Password for authentication.
--rc-realm string  realm for authentication (default "rclone")
--rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
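Global flags like these are simply appended to any rclone invocation. As a hedged illustration (the helper function, the `remote:backup` remote name and the chosen flag values are all invented for this sketch, not part of rclone), composing a command line from a mapping of global flags might look like this:

```python
import shlex


def build_rclone_cmd(subcommand, src, dst, flags):
    """Compose an rclone command line from a mapping of global flags.

    A value of True renders as a bare boolean flag (e.g. --rc);
    any other value is appended as a separate argument.
    """
    parts = ["rclone", subcommand, src, dst]
    for name, value in sorted(flags.items()):
        if value is True:
            parts.append(name)
        else:
            parts.extend([name, str(value)])
    # shlex.quote makes the result safe to paste into a shell
    return " ".join(shlex.quote(p) for p in parts)


# Hypothetical sync with the remote control server enabled
print(build_rclone_cmd(
    "sync", "/data", "remote:backup",
    {"--transfers": 8, "--rc": True,
     "--rc-addr": "localhost:5572", "--stats": "30s"},
))
# → rclone sync /data remote:backup --rc --rc-addr localhost:5572 --stats 30s --transfers 8
```

The flag names used here are taken from the table above; whether a given combination makes sense still depends on the subcommand being run.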
### SEE ALSO
* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File
@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -52,261 +52,279 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP headers - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                Delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-pass string                       Password.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                       User Name
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
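Many of the flags above take rclone's size syntax, either as a `SizeSuffix` value (e.g. `--s3-chunk-size 5M`) or with a `b|k|M|G` suffix (e.g. `--streaming-upload-cutoff 100k`). The following is a minimal sketch of how such values decode, assuming the binary (1024-based) multipliers the rclone docs describe; `parseSizeSuffix` is a hypothetical helper written for illustration, not rclone's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseSizeSuffix decodes a size string with an optional b|k|M|G suffix.
// Assumption: the multipliers are binary (1k = 1024 bytes), as described
// in the rclone documentation; a bare number is taken as bytes here.
func parseSizeSuffix(s string) (int64, error) {
	multipliers := map[byte]int64{
		'b': 1,
		'k': 1 << 10, // 1024
		'M': 1 << 20, // 1048576
		'G': 1 << 30, // 1073741824
	}
	if s == "" {
		return 0, fmt.Errorf("empty size")
	}
	mult := int64(1)
	if m, ok := multipliers[s[len(s)-1]]; ok {
		mult = m
		s = s[:len(s)-1]
	}
	n, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, fmt.Errorf("bad size %q: %v", s, err)
	}
	return int64(n * float64(mult)), nil
}

func main() {
	for _, v := range []string{"100k", "5M", "4.657G"} {
		n, err := parseSizeSuffix(v)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s = %d bytes\n", v, n)
	}
}
```

This binary interpretation is why the B2 backend docs quote `4.657GiB (== 5GB)` as the upload cutoff ceiling: 4.657 binary gigabytes is roughly 5,000,000,000 bytes, i.e. the service's decimal 5 GB limit.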
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
---
date: 2018-10-15T11:00:47+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
---

rclone obscure password [flags]
### Options inherited from parent commands
```
--acd-auth-url string   Auth server URL.
--acd-client-id string   Amazon Application Client ID.
--acd-client-secret string   Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string   Token server url.
--acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string   Remote or path to alias.
--ask-password   Allow prompt for password for encrypted configuration. (default true)
--auto-confirm   If enabled, do not request console confirmation.
--azureblob-access-tier string   Access tier of blob: hot, cool or archive.
--azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string   Endpoint for the service
--azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int   Size of blob list. (default 5000)
--azureblob-sas-url string   SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string   Account ID or Application Key ID
--b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string   Endpoint for the service.
--b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
--b2-key string   Application Key
--b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
--b2-versions   Include old versions in directory listings.
--backup-dir string   Make backups into hierarchy based in DIR.
--bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string   Box App Client Id.
--box-client-secret string   Box App Client Secret
--box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge   Clear all the cached data for this remote on start.
--cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string   The password of the Plex user
--cache-plex-url string   The URL of the Plex server
--cache-plex-username string   The username of the Plex user
--cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
--cache-remote string   Remote to cache.
--cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int   How many workers should run in parallel to download chunks. (default 4)
--cache-writes   Cache file data on writes through the FS
--checkers int   Number of checkers to run in parallel. (default 8)
-c, --checksum   Skip based on checksum & size, not mod-time & size
--config string   Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration   Connect timeout (default 1m0s)
-L, --copy-links   Follow symlinks and copy the pointed to item.
--cpuprofile string   Write cpu profile to file
--crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
--crypt-password string   Password or pass phrase for encryption.
--crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
--crypt-remote string   Remote to encrypt/decrypt.
--crypt-show-mapping   For all files listed show how the names encrypt.
--delete-after   When synchronizing, delete files on destination after transferring (default)
--delete-before   When synchronizing, delete files on destination before transferring
--delete-during   When synchronizing, delete files during transfer
--delete-excluded   Delete files on dest excluded from sync
--disable string   Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export   Use alternate export URLs for google documents export.
--drive-auth-owner-only   Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string   Google Application Client Id
--drive-client-secret string   Google Application Client Secret
--drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string   Deprecated: see export_formats
--drive-impersonate string   Impersonate this user when using a service account.
--drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever   Keep new head revision of each file forever.
--drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string   ID of the root folder
--drive-scope string   Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string   Service Account Credentials JSON blob
--drive-service-account-file string   Service Account Credentials JSON file path
--drive-shared-with-me   Only show files that are shared with me.
--drive-skip-gdocs   Skip google documents in all listings.
--drive-team-drive string   ID of the Team Drive
--drive-trashed-only   Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date   Use file created date instead of modified date.
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
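As a quick sketch of how a couple of the v1.44 flags listed above combine on the command line (`remote:backup` is a hypothetical configured remote, not something defined in this changelog):

```shell
# Sketch only - assumes a remote named "remote:" has already been set up
# with "rclone config"; commands are illustrative, not run here.

# Add microsecond resolution to log timestamps with the new --log-format flag:
rclone sync /path/to/local remote:backup --log-format "date,time,microseconds" -v

# Delete extraneous destination files before transferring new ones
# (by default rclone deletes during the transfer, i.e. --delete-during):
rclone sync /path/to/local remote:backup --delete-before
```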

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@ -29,261 +29,279 @@ rclone purge remote:path [flags]
### Options inherited from parent commands
```
    --acd-auth-url string                    Auth server URL.
    --acd-client-id string                   Amazon Application Client ID.
    --acd-client-secret string               Amazon Application Client Secret.
    --acd-templink-threshold SizeSuffix      Files >= this size will be downloaded via their tempLink. (default 9G)
    --acd-token-url string                   Token server url.
    --acd-upload-wait-per-gb Duration        Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
    --alias-remote string                    Remote or path to alias.
    --ask-password                           Allow prompt for password for encrypted configuration. (default true)
    --auto-confirm                           If enabled, do not request console confirmation.
    --azureblob-access-tier string           Access tier of blob: hot, cool or archive.
    --azureblob-account string               Storage Account Name (leave blank to use connection string or SAS URL)
    --azureblob-chunk-size SizeSuffix        Upload chunk size (<= 100MB). (default 4M)
    --azureblob-endpoint string              Endpoint for the service
    --azureblob-key string                   Storage Account Key (leave blank to use connection string or SAS URL)
    --azureblob-list-chunk int               Size of blob list. (default 5000)
    --azureblob-sas-url string               SAS URL for container level access only
    --azureblob-upload-cutoff SizeSuffix     Cutoff for switching to chunked upload (<= 256MB). (default 256M)
    --b2-account string                      Account ID or Application Key ID
    --b2-chunk-size SizeSuffix               Upload chunk size. Must fit in memory. (default 96M)
    --b2-endpoint string                     Endpoint for the service.
    --b2-hard-delete                         Permanently delete files on remote removal, otherwise hide files.
    --b2-key string                          Application Key
    --b2-test-mode string                    A flag string for X-Bz-Test-Mode header for debugging.
    --b2-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload. (default 200M)
    --b2-versions                            Include old versions in directory listings.
    --backup-dir string                      Make backups into hierarchy based in DIR.
    --bind string                            Local address to bind to for outgoing connections, IPv4, IPv6 or name.
    --box-client-id string                   Box App Client Id.
    --box-client-secret string               Box App Client Secret
    --box-commit-retries int                 Max number of times to try committing a multipart file. (default 100)
    --box-upload-cutoff SizeSuffix           Cutoff for switching to multipart upload (>= 50MB). (default 50M)
    --buffer-size int                        In memory buffer size when reading files for each --transfer. (default 16M)
    --bwlimit BwTimetable                    Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
    --cache-chunk-clean-interval Duration    How often should the cache perform cleanups of the chunk storage. (default 1m0s)
    --cache-chunk-no-memory                  Disable the in-memory cache for storing chunks during streaming.
    --cache-chunk-path string                Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
    --cache-chunk-size SizeSuffix            The size of a chunk (partial file data). (default 5M)
    --cache-chunk-total-size SizeSuffix      The total size that the chunks can take up on the local disk. (default 10G)
    --cache-db-path string                   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
    --cache-db-purge                         Clear all the cached data for this remote on start.
    --cache-db-wait-time Duration            How long to wait for the DB to be available - 0 is unlimited (default 1s)
    --cache-dir string                       Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
    --cache-info-age Duration                How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
    --cache-plex-insecure string             Skip all certificate verifications when connecting to the Plex server
    --cache-plex-password string             The password of the Plex user
    --cache-plex-url string                  The URL of the Plex server
    --cache-plex-username string             The username of the Plex user
    --cache-read-retries int                 How many times to retry a read from a cache storage. (default 10)
    --cache-remote string                    Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
@ -35,261 +35,279 @@ rclone rc commands parameter [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@ -47,261 +47,279 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
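As a quick sketch of how several of the global flags listed above combine on a single command line (the remote name `remote:photos` and the local path are placeholder assumptions, not taken from this commit):

```sh
# Hypothetical invocation: sync a local folder to a remote, using a few of
# the global flags documented above. Adjust paths and remote name to taste.
rclone sync /home/user/photos remote:photos \
    --transfers 8 \
    --retries 5 \
    --tpslimit 10 \
    --stats 30s --stats-one-line \
    -P
```

Flags like `--transfers` and `--tpslimit` trade throughput against load on the backend, while `--stats-one-line` keeps the periodic progress output compact.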
### SEE ALSO

* [rclone](/commands/rclone/)	 - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
 ---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
 title: "rclone rmdir"
 slug: rclone_rmdir
 url: /commands/rclone_rmdir/
@@ -27,261 +27,279 @@ rclone rmdir remote:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                        Auth server URL.
      --acd-client-id string                       Amazon Application Client ID.
      --acd-client-secret string                   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix          Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                       Token server url.
      --acd-upload-wait-per-gb Duration            Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                        Remote or path to alias.
      --ask-password                               Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                               If enabled, do not request console confirmation.
      --azureblob-access-tier string               Access tier of blob: hot, cool or archive.
      --azureblob-account string                   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix            Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                  Endpoint for the service
      --azureblob-key string                       Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                   Size of blob list. (default 5000)
      --azureblob-sas-url string                   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                          Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                         Endpoint for the service.
      --b2-hard-delete                             Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                              Application Key
      --b2-test-mode string                        A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                Include old versions in directory listings.
      --backup-dir string                          Make backups into hierarchy based in DIR.
      --bind string                                Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                       Box App Client Id.
      --box-client-secret string                   Box App Client Secret
      --box-commit-retries int                     Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix               Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                            In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                        Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration        How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                      Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                    Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix          The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                       Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                             Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                           Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                    How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                 Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                 The password of the Plex user
      --cache-plex-url string                      The URL of the Plex server
      --cache-plex-username string                 The username of the Plex user
      --cache-read-retries int                     How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                        Remote to cache.
      --cache-rps int                              Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string               Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration               How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                          How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                               Cache file data on writes through the FS
      --checkers int                               Number of checkers to run in parallel. (default 8)
  -c, --checksum                                   Skip based on checksum & size, not mod-time & size
      --config string                              Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                        Connect timeout (default 1m0s)
  -L, --copy-links                                 Follow symlinks and copy the pointed to item.
      --cpuprofile string                          Write cpu profile to file
      --crypt-directory-name-encryption            Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string           How to encrypt the filenames. (default "standard")
      --crypt-password string                      Password or pass phrase for encryption.
      --crypt-password2 string                     Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                        Remote to encrypt/decrypt.
      --crypt-show-mapping                         For all files listed show how the names encrypt.
      --delete-after                               When synchronizing, delete files on destination after transferring (default)
      --delete-before                              When synchronizing, delete files on destination before transferring
      --delete-during                              When synchronizing, delete files during transfer
      --delete-excluded                            Delete files on dest excluded from sync
      --disable string                             Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                    Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change             Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                     Use alternate export URLs for google documents export.
      --drive-auth-owner-only                      Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                     Google Application Client Id
      --drive-client-secret string                 Google Application Client Secret
      --drive-export-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                       Deprecated: see export_formats
      --drive-impersonate string                   Impersonate this user when using a service account.
      --drive-import-formats string                Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                Keep new head revision of each file forever.
      --drive-list-chunk int                       Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                ID of the root folder
      --drive-scope string                         Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string          Service Account Credentials JSON file path
      --drive-shared-with-me                       Only show files that are shared with me.
      --drive-skip-gdocs                           Skip google documents in all listings.
      --drive-team-drive string                    ID of the Team Drive
      --drive-trashed-only                         Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix             Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                     Use file created date instead of modified date.
      --drive-use-trash                            Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix      If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix              Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                   Dropbox App Client Id
      --dropbox-client-secret string               Dropbox App Client Secret
  -n, --dry-run                                    Do a trial run with no permanent changes
      --dump string                                List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                               Dump HTTP headers - may contain sensitive info
      --exclude stringArray                        Exclude files matching pattern
      --exclude-from stringArray                   Read exclude patterns from file
      --exclude-if-present string                  Exclude directories if filename is present
      --fast-list                                  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                     Read list of source-file names from file
  -f, --filter stringArray                         Add a file-filtering rule
      --filter-from stringArray                    Read filtering patterns from a file
      --ftp-host string                            FTP host to connect to
      --ftp-pass string                            FTP password
      --ftp-port string                            FTP port, leave blank to use default (21)
      --ftp-user string                            FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                      Access Control List for new buckets.
      --gcs-client-id string                       Google Application Client Id
      --gcs-client-secret string                   Google Application Client Secret
      --gcs-location string                        Location for the newly created buckets.
      --gcs-object-acl string                      Access Control List for new objects.
      --gcs-project-number string                  Project number.
      --gcs-service-account-file string            Service Account Credentials JSON file path
      --gcs-storage-class string                   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                            URL of http host to connect to
      --hubic-chunk-size SizeSuffix                Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                     Hubic Client Id
      --hubic-client-secret string                 Hubic Client Secret
      --ignore-checksum                            Skip post copy check of checksums.
      --ignore-errors                              Delete even if there are I/O errors
      --ignore-existing                            Skip all files that exist on destination
      --ignore-size                                Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                               Don't skip files that match size and time - transfer all files
      --immutable                                  Do not modify files. Fail if existing files have been modified.
      --include stringArray                        Include files matching pattern
      --include-from stringArray                   Read include patterns from file
      --jottacloud-hard-delete                     Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix     Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@ -35,261 +35,279 @@ rclone rmdirs remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string  Auth server URL.
--acd-client-id string  Amazon Application Client ID.
--acd-client-secret string  Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string  Token server url.
--acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string  Remote or path to alias.
--ask-password  Allow prompt for password for encrypted configuration. (default true)
--auto-confirm  If enabled, do not request console confirmation.
--azureblob-access-tier string  Access tier of blob: hot, cool or archive.
--azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string  Endpoint for the service
--azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int  Size of blob list. (default 5000)
--azureblob-sas-url string  SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string  Account ID or Application Key ID
--b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string  Endpoint for the service.
--b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
--b2-key string  Application Key
--b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
--b2-versions  Include old versions in directory listings.
--backup-dir string  Make backups into hierarchy based in DIR.
--bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string  Box App Client Id.
--box-client-secret string  Box App Client Secret
--box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int  In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string  Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string  Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge  Clear all the cached data for this remote on start.
--cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string  Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string  The password of the Plex user
--cache-plex-url string  The URL of the Plex server
--cache-plex-username string  The username of the Plex user
--cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
--cache-remote string  Remote to cache.
--cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int  How many workers should run in parallel to download chunks. (default 4)
--cache-writes  Cache file data on writes through the FS
--checkers int  Number of checkers to run in parallel. (default 8)
-c, --checksum  Skip based on checksum & size, not mod-time & size
--config string  Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration  Connect timeout (default 1m0s)
-L, --copy-links  Follow symlinks and copy the pointed to item.
--cpuprofile string  Write cpu profile to file
--crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
--crypt-password string  Password or pass phrase for encryption.
--crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
--crypt-remote string  Remote to encrypt/decrypt.
--crypt-show-mapping  For all files listed show how the names encrypt.
--delete-after  When synchronizing, delete files on destination after transferring (default)
--delete-before  When synchronizing, delete files on destination before transferring
--delete-during  When synchronizing, delete files during transfer
--delete-excluded  Delete files on dest excluded from sync
--disable string  Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export  Use alternate export URLs for google documents export.
--drive-auth-owner-only  Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string  Google Application Client Id
--drive-client-secret string  Google Application Client Secret
--drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string  Deprecated: see export_formats
--drive-impersonate string  Impersonate this user when using a service account.
--drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever  Keep new head revision of each file forever.
--drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string  ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
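
As an illustrative sketch of how several of the global flags listed above combine on one command line (the remote name `remote:` and the local path are hypothetical, and this is not taken from the generated docs):

```sh
# Sync a local directory to a remote, using a few of the flags from the
# listing above: recursive listing, parallel transfers, and log formatting.
rclone sync /local/photos remote:photos \
    --fast-list \
    --transfers 8 \
    --log-level INFO \
    --log-format "date,time"
```

All of the flags shown appear in the v1.44 flag listing; the combination is only an example of typical usage, not a recommendation from the rclone documentation itself.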


@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@ -31,264 +31,283 @@ rclone serve <protocol> [opts] <remote> [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
###### Auto generated by spf13/cobra on 15-Oct-2018
---
date: 2018-10-15T11:00:47+01:00
title: "rclone serve ftp"
slug: rclone_serve_ftp
url: /commands/rclone_serve_ftp/
---
## rclone serve ftp
Serve remote:path over FTP.
### Synopsis
rclone serve ftp implements a basic FTP server to serve the
remote over the FTP protocol. It can be browsed with an FTP client
or you can make a remote of type ftp to read and write it.
### Server options
Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
#### Authentication
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
backend. Changes made locally in the mount may appear immediately or
invalidate the cache. However, changes done on the remote will only
be picked up once the cache expires.
Alternatively, you can send a `SIGHUP` signal to rclone for
it to flush all directory caches, regardless of how old they are.
Assuming only one rclone instance is running, you can reset the cache
like this:
kill -SIGHUP $(pidof rclone)
If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:
rclone rc vfs/forget
Or individual files or directories:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
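Under the hood `rclone rc` POSTs JSON to the remote control HTTP API (by default on `localhost:5572` when rclone is started with `--rc`). A minimal Python sketch of the same `vfs/forget` call, assuming the default rc address; the helper names here are illustrative, not part of rclone:

```python
import json
import urllib.request

# Assumes the rc server is running on the default --rc-addr of
# localhost:5572; adjust RC_URL if you changed it.
RC_URL = "http://localhost:5572"

def rc_request(command, **params):
    """Build the URL and JSON body for an rc command such as
    vfs/forget (nothing is sent here)."""
    url = "%s/%s" % (RC_URL, command)
    body = json.dumps(params).encode("utf-8")
    return url, body

def rc_call(command, **params):
    """Send the request - requires a running rclone rc server."""
    url, body = rc_request(command, **params)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Equivalent of: rclone rc vfs/forget file=path/to/file dir=path/to/dir
url, body = rc_request("vfs/forget", file="path/to/file", dir="path/to/dir")
```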
### File Buffering
The `--buffer-size` flag determines the amount of memory
that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of
data in memory at all times. The buffered data is bound to one file
descriptor and won't be shared between multiple open file descriptors
of the same file.
This flag is an upper limit for the memory used per file descriptor.
The buffer will only use memory for data that is downloaded but not
yet read. If the buffer is empty, only a small amount of memory will
be used.
The maximum memory used by rclone for buffering can be up to
`--buffer-size * open files`.
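As a rough worked example of that upper bound (the helper below is illustrative, not rclone code):

```python
def max_buffer_memory(buffer_size_bytes, open_files):
    """Worst-case memory rclone may use for read-ahead buffering:
    each open file descriptor keeps up to --buffer-size in memory."""
    return buffer_size_bytes * open_files

MiB = 1024 * 1024
# With the default --buffer-size of 16M and 10 files open at once:
print(max_buffer_memory(16 * MiB, 10) // MiB)  # 160 (MiB)
```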
### File Caching
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
can be controlled with `--cache-dir` or setting the appropriate
environment variable.
The cache has 4 different modes selected by `--vfs-cache-mode`.
The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
This will mean some operations are not possible
* Files can't be opened for both read AND write
* Files opened for write can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files open for read with O_TRUNC will be opened write only
* Files open for write only will behave as if O_TRUNC was supplied
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for
write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
* Files opened for write only can't be seeked
* Existing files opened for write must have O_TRUNC set
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
first.
This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
will be purged on a schedule according to `--vfs-cache-max-age`.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to
--low-level-retries times.
```
rclone serve ftp remote:path [flags]
```
### Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for ftp
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--pass string Password for authentication. (empty value allows any password)
--passive-port string Passive port range to use. (default "30000-32000")
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
```
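The chunked reading behaviour controlled by `--vfs-read-chunk-size` and `--vfs-read-chunk-size-limit` can be sketched as follows. This is a simplified model of the ranged-request sizes for a sequential read, not rclone's exact implementation:

```python
def read_chunk_sizes(file_size, chunk_size, limit):
    """Yield the sizes of sequential ranged reads: start at
    --vfs-read-chunk-size and double after each chunk until
    --vfs-read-chunk-size-limit is reached (limit=None models 'off',
    i.e. unlimited doubling)."""
    offset = 0
    while offset < file_size:
        size = min(chunk_size, file_size - offset)
        yield size
        offset += size
        chunk_size *= 2
        if limit is not None:
            chunk_size = min(chunk_size, limit)

M = 1024 * 1024
# Reading a 1000M file with 128M chunks and a 512M limit:
sizes = [s // M for s in read_chunk_sizes(1000 * M, 128 * M, 512 * M)]
print(sizes)  # [128, 256, 512, 104]
```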
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 15-Oct-2018
@ -1,5 +1,5 @@
--- ---
date: 2018-09-01T12:54:54+01:00 date: 2018-10-15T11:00:47+01:00
title: "rclone serve http" title: "rclone serve http"
slug: rclone_serve_http slug: rclone_serve_http
url: /commands/rclone_serve_http/ url: /commands/rclone_serve_http/
@ -115,8 +115,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching ### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a used by rclone mount to make a cloud storage system work more like a
normal file system. normal file system.
@ -241,261 +239,279 @@ rclone serve http remote:path [flags]
### Options inherited from parent commands ### Options inherited from parent commands
``` ```
--acd-auth-url string Auth server URL. --acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID. --acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret. --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url. --acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias. --alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation. --auto-confirm If enabled, do not request console confirmation.
      --azureblob-access-tier string                Access tier of blob: hot, cool or archive.
      --azureblob-account string                    Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix             Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                   Endpoint for the service
      --azureblob-key string                        Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                    Size of blob list. (default 5000)
      --azureblob-sas-url string                    SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                           Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                    Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                          Endpoint for the service.
      --b2-hard-delete                              Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                               Application Key
      --b2-test-mode string                         A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                 Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                 Include old versions in directory listings.
      --backup-dir string                           Make backups into hierarchy based in DIR.
      --bind string                                 Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                        Box App Client Id.
      --box-client-secret string                    Box App Client Secret
      --box-commit-retries int                      Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                             In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                         Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration         How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                       Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                     Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                 The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix           The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                        Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                              Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                 How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                            Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                     How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                  The password of the Plex user
      --cache-plex-url string                       The URL of the Plex server
      --cache-plex-username string                  The username of the Plex user
      --cache-read-retries int                      How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                         Remote to cache.
      --cache-rps int                               Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                           How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                Cache file data on writes through the FS
      --checkers int                                Number of checkers to run in parallel. (default 8)
  -c, --checksum                                    Skip based on checksum & size, not mod-time & size
      --config string                               Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                         Connect timeout (default 1m0s)
  -L, --copy-links                                  Follow symlinks and copy the pointed to item.
      --cpuprofile string                           Write cpu profile to file
      --crypt-directory-name-encryption             Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string            How to encrypt the filenames. (default "standard")
      --crypt-password string                       Password or pass phrase for encryption.
      --crypt-password2 string                      Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                         Remote to encrypt/decrypt.
      --crypt-show-mapping                          For all files listed show how the names encrypt.
      --delete-after                                When synchronizing, delete files on destination after transferring (default)
      --delete-before                               When synchronizing, delete files on destination before transferring
      --delete-during                               When synchronizing, delete files during transfer
      --delete-excluded                             Delete files on dest excluded from sync
      --disable string                              Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                     Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change              Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                      Use alternate export URLs for google documents export.
      --drive-auth-owner-only                       Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                      Google Application Client Id
      --drive-client-secret string                  Google Application Client Secret
      --drive-export-formats string                 Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                        Deprecated: see export_formats
      --drive-impersonate string                    Impersonate this user when using a service account.
      --drive-import-formats string                 Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                 Keep new head revision of each file forever.
      --drive-list-chunk int                        Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                 ID of the root folder
      --drive-scope string                          Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string    Service Account Credentials JSON blob
      --drive-service-account-file string           Service Account Credentials JSON file path
      --drive-shared-with-me                        Only show files that are shared with me.
      --drive-skip-gdocs                            Skip google documents in all listings.
      --drive-team-drive string                     ID of the Team Drive
      --drive-trashed-only                          Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix              Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                      Use file created date instead of modified date.
      --drive-use-trash                             Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix       If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix               Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                    Dropbox App Client Id
      --dropbox-client-secret string                Dropbox App Client Secret
  -n, --dry-run                                     Do a trial run with no permanent changes
      --dump string                                 List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                 Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                Dump HTTP headers - may contain sensitive info
      --exclude stringArray                         Exclude files matching pattern
      --exclude-from stringArray                    Read exclude patterns from file
      --exclude-if-present string                   Exclude directories if filename is present
      --fast-list                                   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                      Read list of source-file names from file
  -f, --filter stringArray                          Add a file-filtering rule
      --filter-from stringArray                     Read filtering patterns from a file
      --ftp-host string                             FTP host to connect to
      --ftp-pass string                             FTP password
      --ftp-port string                             FTP port, leave blank to use default (21)
      --ftp-user string                             FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                       Access Control List for new buckets.
      --gcs-client-id string                        Google Application Client Id
      --gcs-client-secret string                    Google Application Client Secret
      --gcs-location string                         Location for the newly created buckets.
      --gcs-object-acl string                       Access Control List for new objects.
      --gcs-project-number string                   Project number.
      --gcs-service-account-file string             Service Account Credentials JSON file path
      --gcs-storage-class string                    The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                             URL of http host to connect to
      --hubic-chunk-size SizeSuffix                 Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                      Hubic Client Id
      --hubic-client-secret string                  Hubic Client Secret
      --ignore-checksum                             Skip post copy check of checksums.
      --ignore-errors                               delete even if there are I/O errors
      --ignore-existing                             Skip all files that exist on destination
      --ignore-size                                 Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                                Don't skip files that match size and time - transfer all files
      --immutable                                   Do not modify files. Fail if existing files have been modified.
      --include stringArray                         Include files matching pattern
      --include-from stringArray                    Read include patterns from file
      --jottacloud-hard-delete                      Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix      Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                The mountpoint to use.
      --jottacloud-pass string                      Password.
      --jottacloud-unlink                           Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                      User Name
      --local-no-check-updated                      Don't check to see if the files change during upload
      --local-no-unicode-normalization              Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                          Disable UNC (long path names) conversion on Windows
      --log-file string                             Log everything to this file
      --log-format string                           Comma separated list of log format options (default "date,time")
      --log-level string                            Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                       Number of low level retries to do. (default 10)
      --max-age duration                            Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                             Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                              When synchronizing, limit the number of deletes (default -1)
      --max-depth int                               If set limits the recursion depth to this. (default -1)
      --max-size int                                Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                            Maximum size of data to transfer. (default off)
      --mega-debug                                  Output more debug from Mega.
      --mega-hard-delete                            Delete files permanently rather than putting them into the trash.
      --mega-pass string                            Password.
      --mega-user string                            User name
      --memprofile string                           Write memory profile to file
      --min-age duration                            Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int                                Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                      Max time diff to be considered the same (default 1ns)
      --no-check-certificate                        Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                            Don't set Accept-Encoding: gzip.
      --no-traverse                                 Obsolete - does nothing.
      --no-update-modtime                           Don't update destination mod-time if files identical.
  -x, --one-file-system                             Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix              Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                   Microsoft App Client Id
      --onedrive-client-secret string               Microsoft App Client Secret
      --onedrive-drive-id string                    The ID of the drive to use
      --onedrive-drive-type string                  The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files               Set to make OneNote files show up in directory listings.
      --opendrive-password string                   Password.
      --opendrive-username string                   Username
      --pcloud-client-id string                     Pcloud App Client Id
      --pcloud-client-secret string                 Pcloud App Client Secret
  -P, --progress                                    Show progress during transfer.
      --qingstor-access-key-id string               QingStor Access Key ID
      --qingstor-connection-retries int             Number of connection retries. (default 3)
      --qingstor-endpoint string                    Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth                           Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string           QingStor Secret Access Key (password)
      --qingstor-zone string                        Zone to connect to.
  -q, --quiet                                       Print as little stuff as possible
      --rc                                          Enable the remote control server.
      --rc-addr string                              IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string                              SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string                         Client certificate authority to verify clients with
      --rc-htpasswd string                          htpasswd file - if not provided no authentication is done
      --rc-key string                               SSL PEM Private key
      --rc-max-header-bytes int                     Maximum size of request header (default 4096)
      --rc-pass string                              Password for authentication.
      --rc-realm string                             realm for authentication (default "rclone")
      --rc-server-read-timeout duration             Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration            Timeout for server writing data (default 1h0m0s)
      --rc-user string                              User name for authentication.
      --retries int                                 Retry operations this many times if they fail (default 3)
      --retries-sleep duration                      Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string                     AWS Access Key ID.
      --s3-acl string                               Canned ACL used when creating buckets and/or storing objects in S3.
      --s3-chunk-size SizeSuffix                    Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum                         Don't store MD5 checksum with object metadata
      --s3-endpoint string                          Endpoint for S3 API.
      --s3-env-auth                                 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style                         If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string               Location constraint - must be set to match the Region.
      --s3-provider string                          Choose your S3 provider.
      --s3-region string                            Region to connect to.
      --s3-secret-access-key string                 AWS Secret Access Key (password)
      --s3-server-side-encryption string            The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string                     An AWS session token
      --s3-sse-kms-key-id string                    If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string                     The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int                   Concurrency for multipart uploads. (default 2)
      --s3-v2-auth                                  If true use v2 authentication.
      --sftp-ask-password                           Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck                      Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string                            SSH host to connect to
      --sftp-key-file string                        Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
      --sftp-pass string                            SSH password, leave blank to use ssh-agent.
      --sftp-path-override string                   Override path used by SSH connection.
      --sftp-port string                            SSH port, leave blank to use default (22)
      --sftp-set-modtime                            Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher                    Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string                            SSH username, leave blank for current username, ncw
      --size-only                                   Skip based on size only, not mod-time or checksum
      --skip-links                                  Don't warn about skipped symlinks.
      --stats duration                              Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int                  Max file name length in stats. 0 for no limit (default 40)
      --stats-log-level string                      Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line                              Make the stats fit on one line.
      --stats-unit string                           Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff int                 Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string                               Suffix for use with --backup-dir.
      --swift-auth string                           Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string                     Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int                      AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix                 Above this size files will be chunked into a _segments container. (default 5G)
      --swift-domain string                         User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
      --swift-endpoint-type string                  Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
      --swift-env-auth                              Get swift credentials from environment variables in standard OpenStack form.
      --swift-key string                            API key or password (OS_PASSWORD).
      --swift-region string                         Region name - optional (OS_REGION_NAME)
      --swift-storage-policy string                 The storage policy to use when creating a new container
      --swift-storage-url string                    Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
@@ -161,261 +161,279 @@ rclone serve restic remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string   Auth server URL.
      --acd-client-id string   Amazon Application Client ID.
      --acd-client-secret string   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix   Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string   Token server url.
      --acd-upload-wait-per-gb Duration   Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string   Remote or path to alias.
      --ask-password   Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm   If enabled, do not request console confirmation.
      --azureblob-access-tier string   Access tier of blob: hot, cool or archive.
      --azureblob-account string   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix   Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string   Endpoint for the service
      --azureblob-key string   Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int   Size of blob list. (default 5000)
      --azureblob-sas-url string   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string   Account ID or Application Key ID
      --b2-chunk-size SizeSuffix   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string   Endpoint for the service.
      --b2-hard-delete   Permanently delete files on remote removal, otherwise hide files.
      --b2-key string   Application Key
      --b2-test-mode string   A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload. (default 200M)
      --b2-versions   Include old versions in directory listings.
      --backup-dir string   Make backups into hierarchy based in DIR.
      --bind string   Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string   Box App Client Id.
      --box-client-secret string   Box App Client Secret
      --box-commit-retries int   Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix   Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int   In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable   Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration   How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory   Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix   The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix   The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string   Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge   Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration   How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string   Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration   How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string   The password of the Plex user
      --cache-plex-url string   The URL of the Plex server
      --cache-plex-username string   The username of the Plex user
      --cache-read-retries int   How many times to retry a read from a cache storage. (default 10)
      --cache-remote string   Remote to cache.
      --cache-rps int   Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string   Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration   How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int   How many workers should run in parallel to download chunks. (default 4)
      --cache-writes   Cache file data on writes through the FS
      --checkers int   Number of checkers to run in parallel. (default 8)
  -c, --checksum   Skip based on checksum & size, not mod-time & size
      --config string   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration   Connect timeout (default 1m0s)
  -L, --copy-links   Follow symlinks and copy the pointed to item.
      --cpuprofile string   Write cpu profile to file
      --crypt-directory-name-encryption   Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string   How to encrypt the filenames. (default "standard")
      --crypt-password string   Password or pass phrase for encryption.
      --crypt-password2 string   Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string   Remote to encrypt/decrypt.
      --crypt-show-mapping   For all files listed show how the names encrypt.
      --delete-after   When synchronizing, delete files on destination after transferring (default)
      --delete-before   When synchronizing, delete files on destination before transferring
      --delete-during   When synchronizing, delete files during transfer
      --delete-excluded   Delete files on dest excluded from sync
      --disable string   Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse   Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change   Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export   Use alternate export URLs for google documents export.
      --drive-auth-owner-only   Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string   Google Application Client Id
      --drive-client-secret string   Google Application Client Secret
      --drive-export-formats string   Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string   Deprecated: see export_formats
      --drive-impersonate string   Impersonate this user when using a service account.
      --drive-import-formats string   Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever   Keep new head revision of each file forever.
      --drive-list-chunk int   Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string   ID of the root folder
      --drive-scope string   Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string   Service Account Credentials JSON file path
      --drive-shared-with-me   Only show files that are shared with me.
      --drive-skip-gdocs   Skip google documents in all listings.
      --drive-team-drive string   ID of the Team Drive
      --drive-trashed-only   Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date   Use file created date instead of modified date.
      --drive-use-trash   Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix   If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix   Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string   Dropbox App Client Id
      --dropbox-client-secret string   Dropbox App Client Secret
  -n, --dry-run   Do a trial run with no permanent changes
      --dump string   List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies   Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers   Dump HTTP headers - may contain sensitive info
      --exclude stringArray   Exclude files matching pattern
      --exclude-from stringArray   Read exclude patterns from file
      --exclude-if-present string   Exclude directories if filename is present
      --fast-list   Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray   Read list of source-file names from file
  -f, --filter stringArray   Add a file-filtering rule
      --filter-from stringArray   Read filtering patterns from a file
      --ftp-host string   FTP host to connect to
      --ftp-pass string   FTP password
      --ftp-port string   FTP port, leave blank to use default (21)
      --ftp-user string   FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string   Access Control List for new buckets.
      --gcs-client-id string   Google Application Client Id
      --gcs-client-secret string   Google Application Client Secret
      --gcs-location string   Location for the newly created buckets.
      --gcs-object-acl string   Access Control List for new objects.
      --gcs-project-number string   Project number.
      --gcs-service-account-file string   Service Account Credentials JSON file path
      --gcs-storage-class string   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string   URL of http host to connect to
      --hubic-chunk-size SizeSuffix   Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string   Hubic Client Id
      --hubic-client-secret string   Hubic Client Secret
      --ignore-checksum   Skip post copy check of checksums.
      --ignore-errors   Delete even if there are I/O errors
      --ignore-existing   Skip all files that exist on destination
      --ignore-size   Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times   Don't skip files that match size and time - transfer all files
      --immutable   Do not modify files. Fail if existing files have been modified.
      --include stringArray   Include files matching pattern
      --include-from stringArray   Read include patterns from file
      --jottacloud-hard-delete   Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix   Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string   The mountpoint to use.
      --jottacloud-pass string   Password.
      --jottacloud-unlink   Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string   User Name
      --local-no-check-updated   Don't check to see if the files change during upload
      --local-no-unicode-normalization   Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string   Disable UNC (long path names) conversion on Windows
      --log-file string   Log everything to this file
      --log-format string   Comma separated list of log format options (default "date,time")
      --log-level string   Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int   Number of low level retries to do. (default 10)
      --max-age duration   Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int   Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int   When synchronizing, limit the number of deletes (default -1)
      --max-depth int   If set limits the recursion depth to this. (default -1)
      --max-size int   Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int   Maximum size of data to transfer. (default off)
      --mega-debug   Output more debug from Mega.
      --mega-hard-delete   Delete files permanently rather than putting them into the trash.
      --mega-pass string   Password.
      --mega-user string   User name
      --memprofile string   Write memory profile to file
      --min-age duration   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int   Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration   Max time diff to be considered the same (default 1ns)
      --no-check-certificate   Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding   Don't set Accept-Encoding: gzip.
      --no-traverse   Obsolete - does nothing.
      --no-update-modtime   Don't update destination mod-time if files identical.
  -x, --one-file-system   Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix   Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string   Microsoft App Client Id
      --onedrive-client-secret string   Microsoft App Client Secret
      --onedrive-drive-id string   The ID of the drive to use
      --onedrive-drive-type string   The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files   Set to make OneNote files show up in directory listings.
      --opendrive-password string   Password.
      --opendrive-username string   Username
      --pcloud-client-id string   Pcloud App Client Id
      --pcloud-client-secret string   Pcloud App Client Secret
  -P, --progress   Show progress during transfer.
      --qingstor-access-key-id string   QingStor Access Key ID
      --qingstor-connection-retries int   Number of connection retries. (default 3)
      --qingstor-endpoint string   Enter an endpoint URL to connect to the QingStor API.
      --qingstor-env-auth   Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string   QingStor Secret Access Key (password)
      --qingstor-zone string   Zone to connect to.
  -q, --quiet   Print as little stuff as possible
      --rc   Enable the remote control server.
      --rc-addr string   IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string   SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string   Client certificate authority to verify clients with
      --rc-htpasswd string   htpasswd file - if not provided no authentication is done
      --rc-key string   SSL PEM Private key
      --rc-max-header-bytes int   Maximum size of request header (default 4096)
      --rc-pass string   Password for authentication.
      --rc-realm string   realm for authentication (default "rclone")
      --rc-server-read-timeout duration   Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration   Timeout for server writing data (default 1h0m0s)
      --rc-user string   User name for authentication.
      --retries int   Retry operations this many times if they fail (default 3)
      --retries-sleep duration   Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string   AWS Access Key ID.
      --s3-acl string   Canned ACL used when creating buckets and/or storing objects in S3.
      --s3-chunk-size SizeSuffix   Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum   Don't store MD5 checksum with object metadata
      --s3-endpoint string   Endpoint for S3 API.
      --s3-env-auth   Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 15-Oct-2018


@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
@@ -123,8 +123,6 @@ The maximum memory used by rclone for buffering can be up to
### File Caching
These flags control the VFS file caching options. The VFS layer is
used by rclone mount to make a cloud storage system work more like a
normal file system.
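To make the caching flags concrete, here is one plausible invocation — a sketch, not from this commit; the remote name `remote:path`, the address, and the cache path are assumptions:

```shell
# Serve "remote:path" over WebDAV with file caching enabled.
# --vfs-cache-mode writes buffers files being written on local disk
# first, so partial writes from WebDAV clients complete before upload.
# --cache-dir (path is an assumption) controls where that cache lives.
rclone serve webdav remote:path \
  --addr localhost:8080 \
  --vfs-cache-mode writes \
  --cache-dir /tmp/rclone-vfs-cache
```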
@@ -250,261 +248,279 @@ rclone serve webdav remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                          Auth server URL.
      --acd-client-id string                         Amazon Application Client ID.
      --acd-client-secret string                     Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix            Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                         Token server url.
      --acd-upload-wait-per-gb Duration              Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                          Remote or path to alias.
      --ask-password                                 Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                 If enabled, do not request console confirmation.
      --azureblob-access-tier string                 Access tier of blob: hot, cool or archive.
      --azureblob-account string                     Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix              Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                    Endpoint for the service
      --azureblob-key string                         Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                     Size of blob list. (default 5000)
      --azureblob-sas-url string                     SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                            Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                     Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                           Endpoint for the service.
      --b2-hard-delete                               Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                                Application Key
      --b2-test-mode string                          A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                  Include old versions in directory listings.
      --backup-dir string                            Make backups into hierarchy based in DIR.
      --bind string                                  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                         Box App Client Id.
      --box-client-secret string                     Box App Client Secret
      --box-commit-retries int                       Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                 Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                          Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration          How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                        Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                      Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix            The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                         Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                               Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                             Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                      How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                   Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                   The password of the Plex user
      --cache-plex-url string                        The URL of the Plex server
      --cache-plex-username string                   The username of the Plex user
      --cache-read-retries int                       How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                          Remote to cache.
      --cache-rps int                                Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                 Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                 How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                            How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                 Cache file data on writes through the FS
      --checkers int                                 Number of checkers to run in parallel. (default 8)
  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
      --config string                                Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                          Connect timeout (default 1m0s)
  -L, --copy-links                                   Follow symlinks and copy the pointed to item.
      --cpuprofile string                            Write cpu profile to file
      --crypt-directory-name-encryption              Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string             How to encrypt the filenames. (default "standard")
      --crypt-password string                        Password or pass phrase for encryption.
      --crypt-password2 string                       Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                          Remote to encrypt/decrypt.
      --crypt-show-mapping                           For all files listed show how the names encrypt.
      --delete-after                                 When synchronizing, delete files on destination after transferring (default)
      --delete-before                                When synchronizing, delete files on destination before transferring
      --delete-during                                When synchronizing, delete files during transfer
      --delete-excluded                              Delete files on dest excluded from sync
      --disable string                               Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                      Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change               Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                       Use alternate export URLs for google documents export.
      --drive-auth-owner-only                        Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                  Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                       Google Application Client Id
      --drive-client-secret string                   Google Application Client Secret
      --drive-export-formats string                  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                         Deprecated: see export_formats
      --drive-impersonate string                     Impersonate this user when using a service account.
      --drive-import-formats string                  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                  Keep new head revision of each file forever.
      --drive-list-chunk int                         Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                  ID of the root folder
      --drive-scope string                           Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string     Service Account Credentials JSON blob
      --drive-service-account-file string            Service Account Credentials JSON file path
      --drive-shared-with-me                         Only show files that are shared with me.
      --drive-skip-gdocs                             Skip google documents in all listings.
      --drive-team-drive string                      ID of the Team Drive
      --drive-trashed-only                           Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                       Use file created date instead of modified date.
      --drive-use-trash                              Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix        If Objects are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix                Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                     Dropbox App Client Id
      --dropbox-client-secret string                 Dropbox App Client Secret
  -n, --dry-run                                      Do a trial run with no permanent changes
      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                                 Dump HTTP headers - may contain sensitive info
      --exclude stringArray                          Exclude files matching pattern
      --exclude-from stringArray                     Read exclude patterns from file
      --exclude-if-present string                    Exclude directories if filename is present
      --fast-list                                    Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                       Read list of source-file names from file
  -f, --filter stringArray                           Add a file-filtering rule
      --filter-from stringArray                      Read filtering patterns from a file
      --ftp-host string                              FTP host to connect to
      --ftp-pass string                              FTP password
      --ftp-port string                              FTP port, leave blank to use default (21)
      --ftp-user string                              FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                        Access Control List for new buckets.
      --gcs-client-id string                         Google Application Client Id
      --gcs-client-secret string                     Google Application Client Secret
      --gcs-location string                          Location for the newly created buckets.
      --gcs-object-acl string                        Access Control List for new objects.
      --gcs-project-number string                    Project number.
      --gcs-service-account-file string              Service Account Credentials JSON file path
      --gcs-storage-class string                     The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                              URL of http host to connect to
      --hubic-chunk-size SizeSuffix                  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                       Hubic Client Id
      --hubic-client-secret string                   Hubic Client Secret
      --ignore-checksum                              Skip post copy check of checksums.
      --ignore-errors                                Delete even if there are I/O errors
      --ignore-existing                              Skip all files that exist on destination
      --ignore-size                                  Ignore size when skipping, use mod-time or checksum.
  -I, --ignore-times                                 Don't skip files that match size and time - transfer all files
      --immutable                                    Do not modify files. Fail if existing files have been modified.
      --include stringArray                          Include files matching pattern
      --include-from stringArray                     Read include patterns from file
      --jottacloud-hard-delete                       Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix       Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string                 The mountpoint to use.
      --jottacloud-pass string                       Password.
      --jottacloud-unlink                            Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                       User Name
      --local-no-check-updated                       Don't check to see if the files change during upload
      --local-no-unicode-normalization               Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                           Disable UNC (long path names) conversion on Windows
      --log-file string                              Log everything to this file
      --log-format string                            Comma separated list of log format options (default "date,time")
      --log-level string                             Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                        Number of low level retries to do. (default 10)
      --max-age duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                              Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                               When synchronizing, limit the number of deletes (default -1)
      --max-depth int                                If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.

###### Auto generated by spf13/cobra on 15-Oct-2018
@ -0,0 +1,325 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone settier"
slug: rclone_settier
url: /commands/rclone_settier/
---
## rclone settier
Changes storage class/tier of objects in remote.
### Synopsis
rclone settier changes the storage tier or class of objects on a remote, if supported.

A few cloud storage services provide different storage classes for objects, for
example AWS S3 and Glacier; Azure Blob storage with its Hot, Cool and Archive tiers;
and Google Cloud Storage with Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access.
For example, tiering to Archive in Azure Blob storage puts objects into a frozen
state; they can be restored by setting the tier back to Hot or Cool. Similarly,
moving an S3 object to Glacier makes it inaccessible.
You can use it to set the tier of a single object

    rclone settier Cool remote:path/file

Or use rclone filters to set the tier on only specific files

    rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

    rclone settier tier remote:path/dir
```
rclone settier tier remote:path [flags]
```
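The filter examples above can be combined with a listing command to check what a bulk tier change will touch before running it. This is only a sketch: `s3remote:bucket/logs` is a placeholder remote path, and `GLACIER` is a valid tier only on S3 remotes.

```shell
# List the objects the filters would match, without changing anything.
rclone lsl --min-age 30d --include "*.log" s3remote:bucket/logs

# Then change the storage class of those same objects.
rclone settier --min-age 30d --include "*.log" GLACIER s3remote:bucket/logs
```

Because `--include` and `--min-age` are global filter flags, both invocations select the same set of objects.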
### Options
```
-h, --help help for settier
```
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                  Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix             Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                  Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                            Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connnection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
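
The diff that follows updates the generated page for `rclone sha1sum`, which emits `<hash>  <name>` lines in the same layout as standard checksum files. A minimal local sketch of that round-trip (hypothetical file names; coreutils `sha1sum` stands in for rclone here, since no configured remote is assumed):

```shell
# Work in a scratch directory with a file of known content.
tmp=$(mktemp -d)
cd "$tmp"
printf 'hello\n' > a.txt

# `rclone sha1sum remote:path > SHA1SUMS` would emit lines in this same
# "hash  name" format; coreutils sha1sum produces it locally.
sha1sum a.txt > SHA1SUMS
cat SHA1SUMS

# Verify local copies against the listing, e.g. after fetching from a remote.
sha1sum -c SHA1SUMS
```

On macOS, `shasum -a 1` plays the same role as coreutils `sha1sum`.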
View File
@@ -1,5 +1,5 @@
 ---
-date: 2018-09-01T12:54:54+01:00
+date: 2018-10-15T11:00:47+01:00
 title: "rclone sha1sum"
 slug: rclone_sha1sum
 url: /commands/rclone_sha1sum/
@@ -28,261 +28,279 @@ rclone sha1sum remote:path [flags]
 ### Options inherited from parent commands
 ```
 --acd-auth-url string Auth server URL.
 --acd-client-id string Amazon Application Client ID.
 --acd-client-secret string Amazon Application Client Secret.
 --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
 --acd-token-url string Token server url.
 --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
 --alias-remote string Remote or path to alias.
 --ask-password Allow prompt for password for encrypted configuration. (default true)
 --auto-confirm If enabled, do not request console confirmation.
---azureblob-access-tier string Access tier of blob, supports hot, cool and archive tiers.
+--azureblob-access-tier string Access tier of blob: hot, cool or archive.
 --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
---azureblob-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 4M)
+--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
 --azureblob-endpoint string Endpoint for the service
 --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
+--azureblob-list-chunk int Size of blob list. (default 5000)
 --azureblob-sas-url string SAS URL for container level access only
---azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 256M)
+--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
 --b2-account string Account ID or Application Key ID
 --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
 --b2-endpoint string Endpoint for the service.
 --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
 --b2-key string Application Key
 --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
---b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 190.735M)
+--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
 --b2-versions Include old versions in directory listings.
 --backup-dir string Make backups into hierarchy based in DIR.
 --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
 --box-client-id string Box App Client Id.
 --box-client-secret string Box App Client Secret
 --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
---box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 50M)
+--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
 --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
 --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
---cache-chunk-clean-interval Duration Interval at which chunk cleanup runs (default 1m0s)
---cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
---cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
---cache-chunk-size SizeSuffix The size of a chunk. Lower value good for slow connections but can affect seamless reading. (default 5M)
---cache-chunk-total-size SizeSuffix The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted. (default 10G)
---cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
---cache-db-purge Purge the cache DB before
+--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
+--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
+--cache-db-purge Clear all the cached data for this remote on start.
 --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
 --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
---cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s)
+--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
 --cache-plex-password string The password of the Plex user
 --cache-plex-url string The URL of the Plex server
 --cache-plex-username string The username of the Plex user
---cache-read-retries int How many times to retry a read from a cache storage (default 10)
+--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
 --cache-remote string Remote to cache.
---cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
---cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
 --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
---cache-workers int How many workers should run in parallel to download chunks (default 4)
---cache-writes Will cache file data on writes through the FS
+--cache-workers int How many workers should run in parallel to download chunks. (default 4)
+--cache-writes Cache file data on writes through the FS
 --checkers int Number of checkers to run in parallel. (default 8)
 -c, --checksum Skip based on checksum & size, not mod-time & size
 --config string Config file. (default "/home/ncw/.rclone.conf")
 --contimeout duration Connect timeout (default 1m0s)
 -L, --copy-links Follow symlinks and copy the pointed to item.
 --cpuprofile string Write cpu profile to file
 --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
 --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
 --crypt-password string Password or pass phrase for encryption.
 --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
 --crypt-remote string Remote to encrypt/decrypt.
 --crypt-show-mapping For all files listed show how the names encrypt.
 --delete-after When synchronizing, delete files on destination after transfering (default)
 --delete-before When synchronizing, delete files on destination before transfering
 --delete-during When synchronizing, delete files during transfer
 --delete-excluded Delete files on dest excluded from sync
 --disable string Disable a comma separated list of features. Use help to see a list.
 --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
---drive-alternate-export Use alternate export URLs for google documents export.
+--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+--drive-alternate-export Use alternate export URLs for google documents export.,
 --drive-auth-owner-only Only consider files owned by the authenticated user.
 --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
 --drive-client-id string Google Application Client Id
 --drive-client-secret string Google Application Client Secret
---drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+--drive-formats string Deprecated: see export_formats
 --drive-impersonate string Impersonate this user when using a service account.
---drive-keep-revision-forever Keep new head revision forever.
+--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+--drive-keep-revision-forever Keep new head revision of each file forever.
 --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
 --drive-root-folder-id string ID of the root folder
 --drive-scope string Scope that rclone should use when requesting access from drive.
+--drive-service-account-credentials string Service Account Credentials JSON blob
 --drive-service-account-file string Service Account Credentials JSON file path
---drive-shared-with-me Only show files that are shared with me
+--drive-shared-with-me Only show files that are shared with me.
 --drive-skip-gdocs Skip google documents in all listings.
---drive-trashed-only Only show files that are in the trash
+--drive-team-drive string ID of the Team Drive
+--drive-trashed-only Only show files that are in the trash.
 --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
---drive-use-created-date Use created date instead of modified date.
+--drive-use-created-date Use file created date instead of modified date.,
 --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
---dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M)
+--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
 --dropbox-client-id string Dropbox App Client Id
 --dropbox-client-secret string Dropbox App Client Secret
 -n, --dry-run Do a trial run with no permanent changes
 --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
 --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
 --dump-headers Dump HTTP bodies - may contain sensitive info
 --exclude stringArray Exclude files matching pattern
 --exclude-from stringArray Read exclude patterns from file
 --exclude-if-present string Exclude directories if filename is present
 --fast-list Use recursive list if available. Uses more memory but fewer transactions.
 --files-from stringArray Read list of source-file names from file
 -f, --filter stringArray Add a file-filtering rule
 --filter-from stringArray Read filtering patterns from a file
 --ftp-host string FTP host to connect to
 --ftp-pass string FTP password
 --ftp-port string FTP port, leave blank to use default (21)
 --ftp-user string FTP username, leave blank for current username, ncw
 --gcs-bucket-acl string Access Control List for new buckets.
 --gcs-client-id string Google Application Client Id
 --gcs-client-secret string Google Application Client Secret
 --gcs-location string Location for the newly created buckets.
 --gcs-object-acl string Access Control List for new objects.
 --gcs-project-number string Project number.
 --gcs-service-account-file string Service Account Credentials JSON file path
 --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
 --http-url string URL of http host to connect to
+--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
 --hubic-client-id string Hubic Client Id
 --hubic-client-secret string Hubic Client Secret
 --ignore-checksum Skip post copy check of checksums.
 --ignore-errors delete even if there are I/O errors
 --ignore-existing Skip all files that exist on destination
 --ignore-size Ignore size when skipping use mod-time or checksum.
 -I, --ignore-times Don't skip files that match size and time - transfer all files
 --immutable Do not modify files. Fail if existing files have been modified.
 --include stringArray Include files matching pattern
 --include-from stringArray Read include patterns from file
+--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
 --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
 --jottacloud-mountpoint string The mountpoint to use.
 --jottacloud-pass string Password.
+--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
 --jottacloud-user string User Name
 --local-no-check-updated Don't check to see if the files change during upload
---local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
 --local-nounc string Disable UNC (long path names) conversion on Windows
 --log-file string Log everything to this file
+--log-format string Comma separated list of log format options (default "date,time")
 --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
 --low-level-retries int Number of low level retries to do. (default 10)
 --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
 --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
 --max-delete int When synchronizing, limit the number of deletes (default -1)
 --max-depth int If set limits the recursion depth to this. (default -1)
 --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
 --max-transfer int Maximum size of data to transfer. (default off)
 --mega-debug Output more debug from Mega.
 --mega-hard-delete Delete files permanently rather than putting them into the trash.
 --mega-pass string Password.
 --mega-user string User name
 --memprofile string Write memory profile to file
 --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
 --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
 --modify-window duration Max time diff to be considered the same (default 1ns)
 --no-check-certificate Do not verify the server SSL certificate. Insecure.
 --no-gzip-encoding Don't set Accept-Encoding: gzip.
 --no-traverse Obsolete - does nothing.
 --no-update-modtime Don't update destination mod-time if files identical.
 -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
 --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
 --onedrive-client-id string Microsoft App Client Id
 --onedrive-client-secret string Microsoft App Client Secret
+--onedrive-drive-id string The ID of the drive to use
+--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
 --opendrive-password string Password.
 --opendrive-username string Username
 --pcloud-client-id string Pcloud App Client Id
 --pcloud-client-secret string Pcloud App Client Secret
 -P, --progress Show progress during transfer.
 --qingstor-access-key-id string QingStor Access Key ID
 --qingstor-connection-retries int Number of connnection retries. (default 3)
 --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
 --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
 --qingstor-secret-access-key string QingStor Secret Access Key (password)
 --qingstor-zone string Zone to connect to.
 -q, --quiet Print as little stuff as possible
 --rc Enable the remote control server.
 --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
 --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
 --rc-client-ca string Client certificate authority to verify clients with
 --rc-htpasswd string htpasswd file - if not provided no authentication is done
 --rc-key string SSL PEM Private key
 --rc-max-header-bytes int Maximum size of request header (default 4096)
 --rc-pass string Password for authentication.
 --rc-realm string realm for authentication (default "rclone")
 --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
 --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
 --rc-user string User name for authentication.
 --retries int Retry operations this many times if they fail (default 3)
 --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
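Several of the flags above (`--min-size`, `--max-size`, `--bwlimit`, `--streaming-upload-cutoff` and the various chunk-size options) accept sizes written with a `b|k|M|G` suffix, with a bare number read as kBytes per the flag help. The following is a rough, hypothetical sketch of how such a suffix decodes into bytes using binary multiples (k = 1024); it is not rclone's actual `SizeSuffix` parser:

```python
# Hypothetical sketch of b|k|M|G size-suffix decoding (binary multiples).
# Not rclone's real parser - just an illustration of the flag help text.
def parse_size(text: str) -> int:
    """Convert a size like '100k' or '5M' to a byte count (k = 1024)."""
    multipliers = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1] in multipliers:
        return int(float(text[:-1]) * multipliers[text[-1]])
    # A bare number is interpreted as kBytes, per the flag descriptions.
    return int(float(text) * 1024)

print(parse_size("100k"))  # 102400
print(parse_size("5M"))    # 5242880
```

So `--streaming-upload-cutoff 100k` and `--streaming-upload-cutoff 100` describe the same 102400-byte cutoff under this reading.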

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@ -26,261 +26,279 @@ rclone size remote:path [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                        Auth server URL.
      --acd-client-id string                       Amazon Application Client ID.
      --acd-client-secret string                   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix          Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                       Token server url.
      --acd-upload-wait-per-gb Duration            Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                        Remote or path to alias.
      --ask-password                               Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                               If enabled, do not request console confirmation.
      --azureblob-access-tier string               Access tier of blob: hot, cool or archive.
      --azureblob-account string                   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix            Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                  Endpoint for the service
      --azureblob-key string                       Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                   Size of blob list. (default 5000)
      --azureblob-sas-url string                   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                          Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                         Endpoint for the service.
      --b2-hard-delete                             Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                              Application Key
      --b2-test-mode string                        A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                Include old versions in directory listings.
      --backup-dir string                          Make backups into hierarchy based in DIR.
      --bind string                                Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                       Box App Client Id.
      --box-client-secret string                   Box App Client Secret
      --box-commit-retries int                     Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix               Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                            In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                        Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration        How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                      Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                    Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix          The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                       Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                             Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                           Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                    How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                 Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                 The password of the Plex user
      --cache-plex-url string                      The URL of the Plex server
      --cache-plex-username string                 The username of the Plex user
      --cache-read-retries int                     How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                        Remote to cache.
      --cache-rps int                              Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string               Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration               How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                          How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                               Cache file data on writes through the FS
      --checkers int                               Number of checkers to run in parallel. (default 8)
  -c, --checksum                                   Skip based on checksum & size, not mod-time & size
      --config string                              Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                        Connect timeout (default 1m0s)
  -L, --copy-links                                 Follow symlinks and copy the pointed to item.
      --cpuprofile string                          Write cpu profile to file
      --crypt-directory-name-encryption            Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string           How to encrypt the filenames. (default "standard")
      --crypt-password string                      Password or pass phrase for encryption.
      --crypt-password2 string                     Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                        Remote to encrypt/decrypt.
      --crypt-show-mapping                         For all files listed show how the names encrypt.
      --delete-after                               When synchronizing, delete files on destination after transferring (default)
      --delete-before                              When synchronizing, delete files on destination before transferring
      --delete-during                              When synchronizing, delete files during transfer
      --delete-excluded                            Delete files on dest excluded from sync
      --disable string                             Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                    Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change             Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                     Use alternate export URLs for google documents export.
      --drive-auth-owner-only                      Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                     Google Application Client Id
      --drive-client-secret string                 Google Application Client Secret
      --drive-export-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                       Deprecated: see export_formats
      --drive-impersonate string                   Impersonate this user when using a service account.
      --drive-import-formats string                Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                Keep new head revision of each file forever.
      --drive-list-chunk int                       Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                ID of the root folder
      --drive-scope string                         Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string          Service Account Credentials JSON file path
      --drive-shared-with-me                       Only show files that are shared with me.
      --drive-skip-gdocs                           Skip google documents in all listings.
      --drive-team-drive string                    ID of the Team Drive
      --drive-trashed-only                         Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix             Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                     Use file created date instead of modified date.
      --drive-use-trash                            Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix      If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix              Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                   Dropbox App Client Id
      --dropbox-client-secret string               Dropbox App Client Secret
  -n, --dry-run                                    Do a trial run with no permanent changes
      --dump string                                List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                               Dump HTTP headers - may contain sensitive info
      --exclude stringArray                        Exclude files matching pattern
      --exclude-from stringArray                   Read exclude patterns from file
      --exclude-if-present string                  Exclude directories if filename is present
      --fast-list                                  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                     Read list of source-file names from file
  -f, --filter stringArray                         Add a file-filtering rule
      --filter-from stringArray                    Read filtering patterns from a file
      --ftp-host string                            FTP host to connect to
      --ftp-pass string                            FTP password
      --ftp-port string                            FTP port, leave blank to use default (21)
      --ftp-user string                            FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                      Access Control List for new buckets.
      --gcs-client-id string                       Google Application Client Id
      --gcs-client-secret string                   Google Application Client Secret
      --gcs-location string                        Location for the newly created buckets.
      --gcs-object-acl string                      Access Control List for new objects.
      --gcs-project-number string                  Project number.
      --gcs-service-account-file string            Service Account Credentials JSON file path
      --gcs-storage-class string                   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                            URL of http host to connect to
      --hubic-chunk-size SizeSuffix                Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                     Hubic Client Id
      --hubic-client-secret string                 Hubic Client Secret
      --ignore-checksum                            Skip post copy check of checksums.
      --ignore-errors                              delete even if there are I/O errors
      --ignore-existing                            Skip all files that exist on destination
      --ignore-size                                Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                               Don't skip files that match size and time - transfer all files
      --immutable                                  Do not modify files. Fail if existing files have been modified.
      --include stringArray                        Include files matching pattern
      --include-from stringArray                   Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```

### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
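The backend flags in the listing above follow a common rclone pattern: each can also be supplied as an environment variable or stored in the remote's section of `rclone.conf`, as the per-flag docs in this release note (e.g. `--b2-upload-cutoff` ↔ `RCLONE_B2_UPLOAD_CUTOFF` ↔ config key `upload_cutoff`). A sketch of the three equivalent spellings; the remote name `mys3` and the `16M` value are illustrative, not from the generated page:

```shell
# Three equivalent ways to set the S3 upload chunk size (illustrative values):

# 1. Command line flag
rclone copy /data mys3:bucket --s3-chunk-size 16M

# 2. Environment variable: RCLONE_ prefix, flag name upper-cased,
#    dashes replaced by underscores
RCLONE_S3_CHUNK_SIZE=16M rclone copy /data mys3:bucket

# 3. Persistently, in rclone.conf under the remote's section
#    (flag name minus the backend prefix, dashes -> underscores):
#    [mys3]
#    type = s3
#    chunk_size = 16M
```

Command line flags take precedence over environment variables, which in turn override the config file.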

View File

@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@ -44,261 +44,279 @@ rclone sync source:path dest:path [flags]
### Options inherited from parent commands

```
      --acd-auth-url string                         Auth server URL.
      --acd-client-id string                        Amazon Application Client ID.
      --acd-client-secret string                    Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix           Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                        Token server url.
      --acd-upload-wait-per-gb Duration             Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                         Remote or path to alias.
      --ask-password                                Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                                If enabled, do not request console confirmation.
      --azureblob-access-tier string                Access tier of blob: hot, cool or archive.
      --azureblob-account string                    Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix             Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                   Endpoint for the service
      --azureblob-key string                        Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                    Size of blob list. (default 5000)
      --azureblob-sas-url string                    SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix          Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                           Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                    Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                          Endpoint for the service.
      --b2-hard-delete                              Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                               Application Key
      --b2-test-mode string                         A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                 Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                 Include old versions in directory listings.
      --backup-dir string                           Make backups into hierarchy based in DIR.
      --bind string                                 Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                        Box App Client Id.
      --box-client-secret string                    Box App Client Secret
      --box-commit-retries int                      Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix                Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                             In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                         Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration         How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                       Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                     Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                 The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix           The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                        Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                              Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                 How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                            Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                     How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                  The password of the Plex user
      --cache-plex-url string                       The URL of the Plex server
      --cache-plex-username string                  The username of the Plex user
      --cache-read-retries int                      How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                         Remote to cache.
      --cache-rps int                               Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string                Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration                How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                           How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                                Cache file data on writes through the FS
      --checkers int                                Number of checkers to run in parallel. (default 8)
  -c, --checksum                                    Skip based on checksum & size, not mod-time & size
      --config string                               Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                         Connect timeout (default 1m0s)
  -L, --copy-links                                  Follow symlinks and copy the pointed to item.
      --cpuprofile string                           Write cpu profile to file
      --crypt-directory-name-encryption             Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string            How to encrypt the filenames. (default "standard")
      --crypt-password string                       Password or pass phrase for encryption.
      --crypt-password2 string                      Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                         Remote to encrypt/decrypt.
      --crypt-show-mapping                          For all files listed show how the names encrypt.
      --delete-after                                When synchronizing, delete files on destination after transferring (default)
      --delete-before                               When synchronizing, delete files on destination before transferring
      --delete-during                               When synchronizing, delete files during transfer
      --delete-excluded                             Delete files on dest excluded from sync
      --disable string                              Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                     Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change              Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                      Use alternate export URLs for google documents export.
      --drive-auth-owner-only                       Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                      Google Application Client Id
      --drive-client-secret string                  Google Application Client Secret
      --drive-export-formats string                 Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                        Deprecated: see export_formats
      --drive-impersonate string                    Impersonate this user when using a service account.
      --drive-import-formats string                 Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
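The global flags above combine freely on the command line. A few illustrative invocations (the remote name `remote:` and the local paths are placeholders for a remote you have already set up with `rclone config`, not part of rclone itself):

```shell
# Preview a sync without making any changes, with verbose output.
rclone sync /home/user/docs remote:docs --dry-run -v

# Copy with live progress, capped at 1 MByte/s, retrying failed
# operations up to 5 times.
rclone copy /home/user/media remote:media -P --bwlimit 1M --retries 5

# Sync, but move any replaced or deleted files into a backup
# directory on the remote instead of deleting them outright.
rclone sync /home/user/work remote:work \
    --backup-dir remote:work-backup --suffix .bak
```

Flags may also be set via the matching `RCLONE_*` environment variables (for example `RCLONE_TRANSFERS=8`), which is convenient for scripted use.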


---
date: 2018-10-15T11:00:47+01:00
title: "rclone touch"
slug: rclone_touch
url: /commands/rclone_touch/
### Options inherited from parent commands

```
      --acd-auth-url string                       Auth server URL.
      --acd-client-id string                      Amazon Application Client ID.
      --acd-client-secret string                  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix         Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                      Token server url.
      --acd-upload-wait-per-gb Duration           Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                       Remote or path to alias.
      --ask-password                              Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                              If enabled, do not request console confirmation.
      --azureblob-access-tier string              Access tier of blob: hot, cool or archive.
      --azureblob-account string                  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix           Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                 Endpoint for the service
      --azureblob-key string                      Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                  Size of blob list. (default 5000)
      --azureblob-sas-url string                  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix        Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                         Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                  Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                        Endpoint for the service.
      --b2-hard-delete                            Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                             Application Key
      --b2-test-mode string                       A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix               Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                               Include old versions in directory listings.
      --backup-dir string                         Make backups into hierarchy based in DIR.
      --bind string                               Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                      Box App Client Id.
      --box-client-secret string                  Box App Client Secret
      --box-commit-retries int                    Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                           In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration       How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                     Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                   Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix               The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix         The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                      Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                            Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration               How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How much time should object info (file size, file hashes etc) be stored in cache. (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-plex-password string The password of the Plex user --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-url string The URL of the Plex server --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-username string The username of the Plex user --cache-plex-password string The password of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-plex-url string The URL of the Plex server
--cache-remote string Remote to cache. --cache-plex-username string The username of the Plex user
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1) --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage --cache-remote string Remote to cache.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-writes Will cache file data on writes through the FS --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--checkers int Number of checkers to run in parallel. (default 8) --cache-workers int How many workers should run in parallel to download chunks. (default 4)
-c, --checksum Skip based on checksum & size, not mod-time & size --cache-writes Cache file data on writes through the FS
--config string Config file. (default "/home/ncw/.rclone.conf") --checkers int Number of checkers to run in parallel. (default 8)
--contimeout duration Connect timeout (default 1m0s) -c, --checksum Skip based on checksum & size, not mod-time & size
-L, --copy-links Follow symlinks and copy the pointed to item. --config string Config file. (default "/home/ncw/.rclone.conf")
--cpuprofile string Write cpu profile to file --contimeout duration Connect timeout (default 1m0s)
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --cpuprofile string Write cpu profile to file
--crypt-password string Password or pass phrase for encryption. --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-remote string Remote to encrypt/decrypt. --crypt-password string Password or pass phrase for encryption.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--delete-after When synchronizing, delete files on destination after transfering (default) --crypt-remote string Remote to encrypt/decrypt.
--delete-before When synchronizing, delete files on destination before transfering --crypt-show-mapping For all files listed show how the names encrypt.
--delete-during When synchronizing, delete files during transfer --delete-after When synchronizing, delete files on destination after transfering (default)
--delete-excluded Delete files on dest excluded from sync --delete-before When synchronizing, delete files on destination before transfering
--disable string Disable a comma separated list of features. Use help to see a list. --delete-during When synchronizing, delete files during transfer
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --delete-excluded Delete files on dest excluded from sync
--drive-alternate-export Use alternate export URLs for google documents export. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-client-id string Google Application Client Id --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-client-secret string Google Application Client Secret --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-impersonate string Impersonate this user when using a service account. --drive-client-id string Google Application Client Id
--drive-keep-revision-forever Keep new head revision forever. --drive-client-secret string Google Application Client Secret
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-root-folder-id string ID of the root folder --drive-formats string Deprecated: see export_formats
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-impersonate string Impersonate this user when using a service account.
--drive-service-account-file string Service Account Credentials JSON file path --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-shared-with-me Only show files that are shared with me --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-skip-gdocs Skip google documents in all listings. --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-trashed-only Only show files that are in the trash --drive-root-folder-id string ID of the root folder
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-use-created-date Use created date instead of modified date. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-service-account-file string Service Account Credentials JSON file path
--dropbox-chunk-size SizeSuffix Upload chunk size. Max 150M. (default 48M) --drive-shared-with-me Only show files that are shared with me.
--dropbox-client-id string Dropbox App Client Id --drive-skip-gdocs Skip google documents in all listings.
--dropbox-client-secret string Dropbox App Client Secret --drive-team-drive string ID of the Team Drive
-n, --dry-run Do a trial run with no permanent changes --drive-trashed-only Only show files that are in the trash.
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --drive-use-created-date Use file created date instead of modified date.,
--dump-headers Dump HTTP bodies - may contain sensitive info --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--exclude stringArray Exclude files matching pattern --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--exclude-from stringArray Read exclude patterns from file --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--exclude-if-present string Exclude directories if filename is present --dropbox-client-id string Dropbox App Client Id
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --dropbox-client-secret string Dropbox App Client Secret
--files-from stringArray Read list of source-file names from file -n, --dry-run Do a trial run with no permanent changes
-f, --filter stringArray Add a file-filtering rule --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--filter-from stringArray Read filtering patterns from a file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--ftp-host string FTP host to connect to --dump-headers Dump HTTP bodies - may contain sensitive info
--ftp-pass string FTP password --exclude stringArray Exclude files matching pattern
--ftp-port string FTP port, leave blank to use default (21) --exclude-from stringArray Read exclude patterns from file
--ftp-user string FTP username, leave blank for current username, ncw --exclude-if-present string Exclude directories if filename is present
--gcs-bucket-acl string Access Control List for new buckets. --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--gcs-client-id string Google Application Client Id --files-from stringArray Read list of source-file names from file
--gcs-client-secret string Google Application Client Secret -f, --filter stringArray Add a file-filtering rule
--gcs-location string Location for the newly created buckets. --filter-from stringArray Read filtering patterns from a file
--gcs-object-acl string Access Control List for new objects. --ftp-host string FTP host to connect to
--gcs-project-number string Project number. --ftp-pass string FTP password
--gcs-service-account-file string Service Account Credentials JSON file path --ftp-port string FTP port, leave blank to use default (21)
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --ftp-user string FTP username, leave blank for current username, ncw
--http-url string URL of http host to connect to --gcs-bucket-acl string Access Control List for new buckets.
--hubic-client-id string Hubic Client Id --gcs-client-id string Google Application Client Id
--hubic-client-secret string Hubic Client Secret --gcs-client-secret string Google Application Client Secret
--ignore-checksum Skip post copy check of checksums. --gcs-location string Location for the newly created buckets.
--ignore-errors delete even if there are I/O errors --gcs-object-acl string Access Control List for new objects.
--ignore-existing Skip all files that exist on destination --gcs-project-number string Project number.
--ignore-size Ignore size when skipping use mod-time or checksum. --gcs-service-account-file string Service Account Credentials JSON file path
-I, --ignore-times Don't skip files that match size and time - transfer all files --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--immutable Do not modify files. Fail if existing files have been modified. --http-url string URL of http host to connect to
--include stringArray Include files matching pattern --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--include-from stringArray Read include patterns from file --hubic-client-id string Hubic Client Id
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --hubic-client-secret string Hubic Client Secret
--jottacloud-mountpoint string The mountpoint to use. --ignore-checksum Skip post copy check of checksums.
--jottacloud-pass string Password. --ignore-errors delete even if there are I/O errors
--jottacloud-user string User Name --ignore-existing Skip all files that exist on destination
--local-no-check-updated Don't check to see if the files change during upload --ignore-size Ignore size when skipping use mod-time or checksum.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames -I, --ignore-times Don't skip files that match size and time - transfer all files
--local-nounc string Disable UNC (long path names) conversion on Windows --immutable Do not modify files. Fail if existing files have been modified.
--log-file string Log everything to this file --include stringArray Include files matching pattern
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --include-from stringArray Read include patterns from file
--low-level-retries int Number of low level retries to do. (default 10) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --jottacloud-mountpoint string The mountpoint to use.
--max-delete int When synchronizing, limit the number of deletes (default -1) --jottacloud-pass string Password.
--max-depth int If set limits the recursion depth to this. (default -1) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --jottacloud-user string User Name
--max-transfer int Maximum size of data to transfer. (default off) --local-no-check-updated Don't check to see if the files change during upload
--mega-debug Output more debug from Mega. --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --local-nounc string Disable UNC (long path names) conversion on Windows
--mega-pass string Password. --log-file string Log everything to this file
--mega-user string User name --log-format string Comma separated list of log format options (default "date,time")
--memprofile string Write memory profile to file --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --low-level-retries int Number of low level retries to do. (default 10)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--modify-window duration Max time diff to be considered the same (default 1ns) --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--no-check-certificate Do not verify the server SSL certificate. Insecure. --max-delete int When synchronizing, limit the number of deletes (default -1)
--no-gzip-encoding Don't set Accept-Encoding: gzip. --max-depth int If set limits the recursion depth to this. (default -1)
--no-traverse Obsolete - does nothing. --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--no-update-modtime Don't update destination mod-time if files identical. --max-transfer int Maximum size of data to transfer. (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --mega-debug Output more debug from Mega.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--onedrive-client-id string Microsoft App Client Id --mega-pass string Password.
--onedrive-client-secret string Microsoft App Client Secret --mega-user string User name
--opendrive-password string Password. --memprofile string Write memory profile to file
--opendrive-username string Username --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--pcloud-client-id string Pcloud App Client Id --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--pcloud-client-secret string Pcloud App Client Secret --modify-window duration Max time diff to be considered the same (default 1ns)
-P, --progress Show progress during transfer. --no-check-certificate Do not verify the server SSL certificate. Insecure.
--qingstor-access-key-id string QingStor Access Key ID --no-gzip-encoding Don't set Accept-Encoding: gzip.
--qingstor-connection-retries int Number of connnection retries. (default 3) --no-traverse Obsolete - does nothing.
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --no-update-modtime Don't update destination mod-time if files identical.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--qingstor-secret-access-key string QingStor Secret Access Key (password) --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--qingstor-zone string Zone to connect to. --onedrive-client-id string Microsoft App Client Id
-q, --quiet Print as little stuff as possible --onedrive-client-secret string Microsoft App Client Secret
--rc Enable the remote control server. --onedrive-drive-id string The ID of the drive to use
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--rc-client-ca string Client certificate authority to verify clients with --opendrive-password string Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

###### Auto generated by spf13/cobra on 15-Oct-2018
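Backend flags such as `--swift-*` and global flags such as `--transfers` can be freely mixed on one command line. As a minimal, side-effect-free sketch (the remote name `swift-backup`, the paths and every flag value below are hypothetical, chosen only to illustrate the syntax):

```shell
# Collect the options in an array so the final command line stays readable.
flags=(
  --swift-env-auth       # read OS_* credentials from the environment
  --swift-region LON     # equivalent to setting OS_REGION_NAME
  --transfers 8          # run 8 file transfers in parallel
  --tpslimit 10          # cap HTTP transactions per second
)

# Print the assembled command instead of executing it, so the sketch
# does not require a configured remote.
echo rclone copy /srv/data swift-backup:archive "${flags[@]}"
```

Dropping the leading `echo` would run the copy for real against a remote you have configured with `rclone config`.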

View File

@@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@@ -68,261 +68,279 @@ rclone tree remote:path [flags]

### Options inherited from parent commands

```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering (default)
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
-n, --dry-run Do a trial run with no permanent changes
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, ncw
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-user string User Name
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
``` ```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018
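The global flags in the listings above and below combine freely on a single command line. As an illustrative sketch only (the remote name `remote:backup` and the `/data` path are placeholders, not taken from this document), a rate-limited sync with tuned progress reporting might look like:

```shell
# Illustrative only: "remote:backup" and /data are placeholder names.
# --transfers/--checkers control parallelism; --tpslimit caps HTTP
# transactions per second; --stats and --stats-log-level control the
# periodic progress report described in the flag tables.
rclone sync /data remote:backup \
    --transfers 8 \
    --checkers 16 \
    --tpslimit 10 \
    --stats 30s \
    --stats-log-level INFO \
    --log-level INFO
```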
@ -1,5 +1,5 @@
---
date: 2018-10-15T11:00:47+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@ -53,261 +53,279 @@ rclone version [flags]
### Options inherited from parent commands
```
      --acd-auth-url string                        Auth server URL.
      --acd-client-id string                       Amazon Application Client ID.
      --acd-client-secret string                   Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix          Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string                       Token server url.
      --acd-upload-wait-per-gb Duration            Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string                        Remote or path to alias.
      --ask-password                               Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm                               If enabled, do not request console confirmation.
      --azureblob-access-tier string               Access tier of blob: hot, cool or archive.
      --azureblob-account string                   Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix            Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string                  Endpoint for the service
      --azureblob-key string                       Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int                   Size of blob list. (default 5000)
      --azureblob-sas-url string                   SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix         Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string                          Account ID or Application Key ID
      --b2-chunk-size SizeSuffix                   Upload chunk size. Must fit in memory. (default 96M)
      --b2-endpoint string                         Endpoint for the service.
      --b2-hard-delete                             Permanently delete files on remote removal, otherwise hide files.
      --b2-key string                              Application Key
      --b2-test-mode string                        A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix                Cutoff for switching to chunked upload. (default 200M)
      --b2-versions                                Include old versions in directory listings.
      --backup-dir string                          Make backups into hierarchy based in DIR.
      --bind string                                Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string                       Box App Client Id.
      --box-client-secret string                   Box App Client Secret
      --box-commit-retries int                     Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix               Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size int                            In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable                        Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration        How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory                      Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string                    Directory to cache chunk files. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix                The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix          The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string                       Directory to store file structure metadata DB. (default "/home/ncw/.cache/rclone/cache-backend")
      --cache-db-purge                             Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration                How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string                           Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
      --cache-info-age Duration                    How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string                 Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string                 The password of the Plex user
      --cache-plex-url string                      The URL of the Plex server
      --cache-plex-username string                 The username of the Plex user
      --cache-read-retries int                     How many times to retry a read from a cache storage. (default 10)
      --cache-remote string                        Remote to cache.
      --cache-rps int                              Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string               Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration               How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int                          How many workers should run in parallel to download chunks. (default 4)
      --cache-writes                               Cache file data on writes through the FS
      --checkers int                               Number of checkers to run in parallel. (default 8)
  -c, --checksum                                   Skip based on checksum & size, not mod-time & size
      --config string                              Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration                        Connect timeout (default 1m0s)
  -L, --copy-links                                 Follow symlinks and copy the pointed to item.
      --cpuprofile string                          Write cpu profile to file
      --crypt-directory-name-encryption            Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string           How to encrypt the filenames. (default "standard")
      --crypt-password string                      Password or pass phrase for encryption.
      --crypt-password2 string                     Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string                        Remote to encrypt/decrypt.
      --crypt-show-mapping                         For all files listed show how the names encrypt.
      --delete-after                               When synchronizing, delete files on destination after transfering (default)
      --delete-before                              When synchronizing, delete files on destination before transfering
      --delete-during                              When synchronizing, delete files during transfer
      --delete-excluded                            Delete files on dest excluded from sync
      --disable string                             Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse                    Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change             Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export                     Use alternate export URLs for google documents export.
      --drive-auth-owner-only                      Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
      --drive-client-id string                     Google Application Client Id
      --drive-client-secret string                 Google Application Client Secret
      --drive-export-formats string                Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string                       Deprecated: see export_formats
      --drive-impersonate string                   Impersonate this user when using a service account.
      --drive-import-formats string                Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever                Keep new head revision of each file forever.
      --drive-list-chunk int                       Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-root-folder-id string                ID of the root folder
      --drive-scope string                         Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string   Service Account Credentials JSON blob
      --drive-service-account-file string          Service Account Credentials JSON file path
      --drive-shared-with-me                       Only show files that are shared with me.
      --drive-skip-gdocs                           Skip google documents in all listings.
      --drive-team-drive string                    ID of the Team Drive
      --drive-trashed-only                         Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix             Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date                     Use file created date instead of modified date.
      --drive-use-trash                            Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix      If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix              Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string                   Dropbox App Client Id
      --dropbox-client-secret string               Dropbox App Client Secret
  -n, --dry-run                                    Do a trial run with no permanent changes
      --dump string                                List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies                                Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                               Dump HTTP headers - may contain sensitive info
      --exclude stringArray                        Exclude files matching pattern
      --exclude-from stringArray                   Read exclude patterns from file
      --exclude-if-present string                  Exclude directories if filename is present
      --fast-list                                  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray                     Read list of source-file names from file
  -f, --filter stringArray                         Add a file-filtering rule
      --filter-from stringArray                    Read filtering patterns from a file
      --ftp-host string                            FTP host to connect to
      --ftp-pass string                            FTP password
      --ftp-port string                            FTP port, leave blank to use default (21)
      --ftp-user string                            FTP username, leave blank for current username, ncw
      --gcs-bucket-acl string                      Access Control List for new buckets.
      --gcs-client-id string                       Google Application Client Id
      --gcs-client-secret string                   Google Application Client Secret
      --gcs-location string                        Location for the newly created buckets.
      --gcs-object-acl string                      Access Control List for new objects.
      --gcs-project-number string                  Project number.
      --gcs-service-account-file string            Service Account Credentials JSON file path
      --gcs-storage-class string                   The storage class to use when storing objects in Google Cloud Storage.
      --http-url string                            URL of http host to connect to
      --hubic-chunk-size SizeSuffix                Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string                     Hubic Client Id
      --hubic-client-secret string                 Hubic Client Secret
      --ignore-checksum                            Skip post copy check of checksums.
      --ignore-errors                              Delete even if there are I/O errors
      --ignore-existing                            Skip all files that exist on destination
      --ignore-size                                Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times                               Don't skip files that match size and time - transfer all files
      --immutable                                  Do not modify files. Fail if existing files have been modified.
      --include stringArray                        Include files matching pattern
      --include-from stringArray                   Read include patterns from file
      --jottacloud-hard-delete                     Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix     Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string               The mountpoint to use.
      --jottacloud-pass string                     Password.
      --jottacloud-unlink                          Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-user string                     User Name
      --local-no-check-updated                     Don't check to see if the files change during upload
      --local-no-unicode-normalization             Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string                         Disable UNC (long path names) conversion on Windows
      --log-file string                            Log everything to this file
      --log-format string                          Comma separated list of log format options (default "date,time")
      --log-level string                           Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int                      Number of low level retries to do. (default 10)
      --max-age duration                           Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int                            Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int                             When synchronizing, limit the number of deletes (default -1)
      --max-depth int                              If set limits the recursion depth to this. (default -1)
      --max-size int                               Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer int                           Maximum size of data to transfer. (default off)
      --mega-debug                                 Output more debug from Mega.
      --mega-hard-delete                           Delete files permanently rather than putting them into the trash.
      --mega-pass string                           Password.
      --mega-user string                           User name
      --memprofile string                          Write memory profile to file
      --min-age duration                           Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size int                               Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration                     Max time diff to be considered the same (default 1ns)
      --no-check-certificate                       Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                           Don't set Accept-Encoding: gzip.
      --no-traverse                                Obsolete - does nothing.
      --no-update-modtime                          Don't update destination mod-time if files identical.
  -x, --one-file-system                            Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix             Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string                  Microsoft App Client Id
      --onedrive-client-secret string              Microsoft App Client Secret
      --onedrive-drive-id string                   The ID of the drive to use
      --onedrive-drive-type string                 The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files              Set to make OneNote files show up in directory listings.
      --opendrive-password string                  Password.
--rc-htpasswd string htpasswd file - if not provided no authentication is done --opendrive-username string Username
--rc-key string SSL PEM Private key --pcloud-client-id string Pcloud App Client Id
--rc-max-header-bytes int Maximum size of request header (default 4096) --pcloud-client-secret string Pcloud App Client Secret
--rc-pass string Password for authentication. -P, --progress Show progress during transfer.
--rc-realm string realm for authentication (default "rclone") --qingstor-access-key-id string QingStor Access Key ID
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --qingstor-connection-retries int Number of connnection retries. (default 3)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-user string User name for authentication. --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--retries int Retry operations this many times if they fail (default 3) --qingstor-secret-access-key string QingStor Secret Access Key (password)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --qingstor-zone string Zone to connect to.
--s3-access-key-id string AWS Access Key ID. -q, --quiet Print as little stuff as possible
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3. --rc Enable the remote control server.
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5M) --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--s3-endpoint string Endpoint for S3 API. --rc-client-ca string Client certificate authority to verify clients with
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-htpasswd string htpasswd file - if not provided no authentication is done
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --rc-key string SSL PEM Private key
--s3-location-constraint string Location constraint - must be set to match the Region. --rc-max-header-bytes int Maximum size of request header (default 4096)
--s3-provider string Choose your S3 provider. --rc-pass string Password for authentication.
--s3-region string Region to connect to. --rc-realm string realm for authentication (default "rclone")
--s3-secret-access-key string AWS Secret Access Key (password) --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --rc-user string User name for authentication.
--s3-storage-class string The storage class to use when storing objects in S3. --retries int Retry operations this many times if they fail (default 3)
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--sftp-ask-password Allow asking for SFTP password when needed. --s3-access-key-id string AWS Access Key ID.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3.
--sftp-host string SSH host to connect to --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-disable-checksum Don't store MD5 checksum with object metadata
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-endpoint string Endpoint for S3 API.
--sftp-path-override string Override path used by SSH connection. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--sftp-port string SSH port, leave blank to use default (22) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-provider string Choose your S3 provider.
--sftp-user string SSH username, leave blank for current username, ncw --s3-region string Region to connect to.
--size-only Skip based on size only, not mod-time or checksum --s3-secret-access-key string AWS Secret Access Key (password)
--skip-links Don't warn about skipped symlinks. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --s3-session-token string An AWS session token
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --s3-storage-class string The storage class to use when storing new objects in S3.
--stats-one-line Make the stats fit on one line. --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --s3-v2-auth If true use v2 authentication.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-ask-password Allow asking for SFTP password when needed.
--suffix string Suffix for use with --backup-dir. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-host string SSH host to connect to
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-pass string SSH password, leave blank to use ssh-agent.
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --sftp-path-override string Override path used by SSH connection.
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --sftp-port string SSH port, leave blank to use default (22)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-key string API key or password (OS_PASSWORD). --sftp-user string SSH username, leave blank for current username, ncw
--swift-region string Region name - optional (OS_REGION_NAME) --size-only Skip based on size only, not mod-time or checksum
--swift-storage-policy string The storage policy to use when creating a new container --skip-links Don't warn about skipped symlinks.
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --stats-one-line Make the stats fit on one line.
--swift-user string User name to log in (OS_USERNAME). --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--syslog Use Syslog for logging --suffix string Suffix for use with --backup-dir.
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth string Authentication URL for server (OS_AUTH_URL).
--timeout duration IO idle timeout (default 5m0s) --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--tpslimit float Limit HTTP transactions per second to this. --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--transfers int Number of file transfers to run in parallel. (default 4) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
-u, --update Skip files that are newer on the destination. --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--use-server-modtime Use server modified time instead of object metadata --swift-key string API key or password (OS_PASSWORD).
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43") --swift-region string Region name - optional (OS_REGION_NAME)
-v, --verbose count Print lots more stuff (repeat for more) --swift-storage-policy string The storage policy to use when creating a new container
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--webdav-pass string Password. --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-url string URL of http host to connect to --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-user string User name --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-vendor string Name of the Webdav site/service/software you are using --swift-user string User name to log in (OS_USERNAME).
--yandex-client-id string Yandex Client Id --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--yandex-client-secret string Yandex Client Secret --syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.44")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 15-Oct-2018

View File

@ -81,6 +81,33 @@ Eg
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
### cache/fetch: Fetch file chunks
Ensure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check.
It takes a comma separated list of array slice indices.
The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is the 0 based chunk number from the beginning
of the file to fetch exclusive.
Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file.
Some valid examples are:
":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to
specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote
is used on top of the cache.
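The slice arithmetic above can be sketched in a few lines of Python. This is an illustrative model of how a chunks= spec resolves to chunk numbers, not rclone's own implementation (rclone is written in Go); `total` is an assumed chunk count for the file:

```python
def parse_chunk_spec(spec, total):
    """Resolve a chunks= spec such as ":5,-5:" into concrete chunk indices.

    Illustrative model only -- not rclone's actual code. `total` is the
    number of chunks the file occupies in the cache.
    """
    chunks = set()
    for part in spec.split(","):
        if ":" in part:  # a start:end slice; either end may be omitted
            start_s, end_s = part.split(":")
            start = int(start_s) if start_s else 0
            end = int(end_s) if end_s else total
        else:            # a single chunk number
            start = int(part)
            end = start + 1
        # negative values count from the back of the file
        if start < 0:
            start += total
        if end < 0:
            end += total
        chunks.update(range(max(start, 0), min(end, total)))
    return sorted(chunks)
```

For a 20-chunk file, `":5,-5:"` resolves to the first and last five chunks, and `"0,-2"` to chunks 0 and 18.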
### cache/stats: Get cache stats
Show statistics for the cache remote.
@ -133,6 +160,8 @@ Returns the following values:
"speed": average speed in bytes/sec since start of the process, "speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process, "bytes": total transferred bytes since the start of the process,
"errors": number of errors, "errors": number of errors,
"fatalError": whether there has been at least one FatalError,
"retryError": whether there has been at least one non-NoRetryError,
"checks": number of checked files, "checks": number of checked files,
"transfers": number of transferred files, "transfers": number of transferred files,
"deletes" : number of deleted files, "deletes" : number of deleted files,
@ -189,6 +218,28 @@ starting with dir will forget that dir, eg
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
### vfs/poll-interval: Get the status or update the value of the poll-interval option.
Without any parameter given this returns the current status of the
poll-interval setting.
When the interval=duration parameter is set, the poll-interval value
is updated and the polling function is notified.
Setting interval=0 disables poll-interval.
rclone rc vfs/poll-interval interval=5m
The timeout=duration parameter can be used to specify a time to wait
for the current poll function to apply the new value.
If timeout is less than or equal to 0, which is the default, it waits indefinitely.
The new poll-interval value will only be active when the timeout is
not reached.
If poll-interval is updated or disabled temporarily, some changes
might not get picked up by the polling function, depending on the
used remote.
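Both interval= and timeout= take Go-style duration strings such as "5m" or "1h30m". A rough Python model of how such strings decompose into seconds — a sketch for illustration only; rclone itself parses them with Go's time.ParseDuration, which accepts more unit forms:

```python
import re

# Seconds per unit for the common duration suffixes.
_UNITS = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}

def duration_seconds(text):
    """Convert a Go-style duration like "1h0m0s" or "500ms" to seconds."""
    if text == "0":  # interval=0 disables poll-interval
        return 0.0
    total = 0.0
    # Sum each value/unit pair; "ms" is tried before "s" so it wins.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total
```

For example, `duration_seconds("5m")` gives 300.0 seconds.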
### vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the

View File

@ -1 +1 @@
v1.44

View File

@ -1,4 +1,4 @@
package fs
// Version of rclone
var Version = "v1.44"

5570
rclone.1

File diff suppressed because it is too large Load Diff