--- title: "rclone" description: "Show help for rclone commands, flags and backends." # autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs" --- # rclone Show help for rclone commands, flags and backends. ## Synopsis Rclone syncs files to and from cloud storage providers as well as mounting them, listing them in lots of different ways. See the home page (https://rclone.org/) for installation, usage, documentation, changelog and configuration walkthroughs. ``` rclone [flags] ``` ## Options ``` --alias-description string Description of the remote --alias-remote string Remote or path to alias --ask-password Allow prompt for password for encrypted configuration (default true) --auto-confirm If enabled, do not request console confirmation --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name --azureblob-archive-tier-delete Delete archive tier blobs before overwriting --azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi) --azureblob-client-certificate-password string Password for the certificate file (optional) (obscured) --azureblob-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key --azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal's client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don't store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) --azureblob-endpoint string Endpoint for the service --azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azureblob-key string Storage Account Shared Key --azureblob-list-chunk int Size of blob list (default 5000) --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any --azureblob-no-check-container If set, don't attempt to check the container exists or create it --azureblob-no-head-object If set, do not do HEAD before GET when getting objects --azureblob-password string The user's password (obscured) --azureblob-public-access string Public access level of a container: blob or container --azureblob-sas-url string SAS URL for container level access only --azureblob-service-principal-file string Path to file containing credentials for use with a service principal --azureblob-tenant string ID of the service principal's tenant. 
Also called its directory ID --azureblob-upload-concurrency int Concurrency for multipart uploads (default 16) --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated) --azureblob-use-emulator Uses local storage emulator if provided as 'true' --azureblob-use-msi Use a managed service identity to authenticate (only works in Azure) --azureblob-username string User name (usually an email address) --azurefiles-account string Azure Storage Account Name --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi) --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured) --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key --azurefiles-client-id string The ID of the client in use --azurefiles-client-secret string One of the service principal's client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) --azurefiles-key string Storage Account Shared Key --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi) --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any --azurefiles-password string The user's password (obscured) --azurefiles-sas-url string SAS URL --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal --azurefiles-share-name string Azure Files Share Name --azurefiles-tenant string ID of the service principal's tenant. 
Also called its directory ID --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16) --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure) --azurefiles-username string User name (usually an email address) --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-version-at Time Show file versions as they were at the specified time (default off) --b2-versions Include old versions in directory listings --backup-dir string Make backups into hierarchy based in DIR --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name --box-access-token string Box App Primary Access Token --box-auth-url string Auth server URL --box-box-config-file string Box App config.json location --box-box-sub-type string (default "user") --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) --box-owned-by string Only show items owned by the login (email address) passed in --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point --box-token string OAuth Access Token as a JSON blob --box-token-url string Token server url --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi) --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable --ca-cert stringArray CA certificate used to verify servers --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s) --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming --cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi) --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the 
local disk (default 10Gi) --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-description string Description of the remote --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) --cache-plex-url string The URL of the Plex server --cache-plex-username string The username of the Plex user --cache-read-retries int How many times to retry a read from a cache storage (default 10) --cache-remote string Remote to cache --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --check-first Do all the checks before starting transfers --checkers int Number of checkers to run in parallel (default 8) -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO) --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compare-dest stringArray Include additional server-side paths during comparison --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress --config string Config file (default "$HOME/.config/rclone/rclone.conf") --contimeout Duration Connect timeout (default 1m0s) --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination -L, --copy-links Follow symlinks and copy the pointed to item --cpuprofile string Write cpu profile to file --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32") --crypt-filename-encryption string How to encrypt the filenames (default "standard") --crypt-no-data-encryption Option to either encrypt file 
data or leave it unencrypted --crypt-pass-bad-blocks If set this will pass bad blocks through as all 0 --crypt-password string Password or pass phrase for encryption (obscured) --crypt-password2 string Password or pass phrase for salt (obscured) --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted --crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin") --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD) --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer --delete-excluded Delete files on dest excluded from sync --disable string Disable a comma separated list of features (use --disable help to see a list) --disable-http-keep-alives Disable HTTP keep-alives and use each connection once --disable-http2 Disable HTTP/2 in the global transport --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs --drive-auth-owner-only Only consider files owned by the authenticated user --drive-auth-url string Auth server URL --drive-chunk-size SizeSuffix Upload chunk size (default 8Mi) --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg") --drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true) --drive-formats string Deprecated: See export_formats --drive-impersonate string Impersonate this user when using a service account --drive-import-formats string Comma separated list of preferred formats for uploading Google docs --drive-keep-revision-forever Keep new head revision of each file forever --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000) --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off) --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read) --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --drive-pacer-burst int Number of API calls to allow without sleeping (default 100) --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms) --drive-resource-key string Resource key for accessing a link-shared file --drive-root-folder-id string ID of the root folder --drive-scope string Comma 
separated list of scopes that rclone should use when requesting access from drive --drive-server-side-across-configs Deprecated: use --server-side-across-configs instead --drive-service-account-credentials string Service Account Credentials JSON blob --drive-service-account-file string Service Account Credentials JSON file path --drive-shared-with-me Only show files that are shared with me --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings --drive-size-as-quota Show sizes as storage quota usage, not actual size --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only --drive-skip-dangling-shortcuts If set skip dangling shortcut files --drive-skip-gdocs Skip google documents in all listings --drive-skip-shortcuts If set skip shortcut files --drive-starred-only Only show files that are starred --drive-stop-on-download-limit Make download limit errors be fatal --drive-stop-on-upload-limit Make upload limit errors be fatal --drive-team-drive string ID of the Shared Drive (Team Drive) --drive-token string OAuth Access Token as a JSON blob --drive-token-url string Token server url --drive-trashed-only Only show files that are in the trash --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi) --drive-use-created-date Use file created date instead of modified date --drive-use-shared-date Use date file was shared instead of modified date --drive-use-trash Send files to the trash instead of deleting permanently (default true) --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off) --dropbox-auth-url string Auth server URL --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s) --dropbox-batch-mode string Upload file batching sync|async|off (default "sync") --dropbox-batch-size int Max number of files in upload batch --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths --dropbox-shared-files Instructs rclone to work on individual shared files --dropbox-shared-folders Instructs rclone to work on shared folders --dropbox-token string OAuth Access Token as a JSON blob --dropbox-token-url string Token server url -n, --dry-run Do a trial run with no permanent changes --dscp string Set DSCP value to connections, value or name, e.g. 
CS1, LE, DF, AF21 --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper --dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dump-headers Dump HTTP headers - may contain sensitive info --error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) --exclude-if-present stringArray Exclude directories if filename is present --expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s) --fast-list Use recursive list if available; uses more memory but fewer transactions --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder --filefabric-token string Session Token --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric --files-from stringArray Read list of source-file names from file (use - to read from stdin) --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) --filescom-api-key string The API key used to authenticate with Files.com --filescom-description string Description of the remote --filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --filescom-password string The password used to authenticate with Files.com (obscured) --filescom-site string Your site subdomain (e.g. mysite) or custom domain (e.g. 
myfiles.customdomain.com) --filescom-username string The username used to authenticate with Files.com -f, --filter stringArray Add a file filtering rule --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) --fix-case Force rename of case insensitive dest to match source --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) --ftp-disable-utf8 Disable using UTF-8 even if server advertises support --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD --ftp-host string FTP host to connect to --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s) --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-pass string FTP password (obscured) --ftp-port int FTP port number (default 21) --ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s) --ftp-socks-proxy string Socks 5 proxy host --ftp-tls Use Implicit FTPS (FTP over TLS) --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32) --ftp-user string FTP username (default "$USER") --ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk) --gcs-anonymous Access public buckets and objects without credentials --gcs-auth-url string Auth server URL --gcs-bucket-acl string Access Control List for new buckets --gcs-bucket-policy-only Access checks should use bucket-level IAM policies --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars) --gcs-location string Location for the newly created buckets --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it --gcs-object-acl string Access Control List for new objects --gcs-project-number string Project number --gcs-service-account-file string Service Account Credentials JSON file path --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage --gcs-token string OAuth Access Token as a JSON blob --gcs-token-url string Token server url --gcs-user-project string User project --gofile-access-token string API Access token --gofile-account-id string Account ID --gofile-description string Description of the remote --gofile-encoding Encoding 
The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftPeriod,RightPeriod,InvalidUtf8,Dot,Exclamation) --gofile-list-chunk int Number of items to list in each call (default 1000) --gofile-root-folder-id string ID of the root folder --gphotos-auth-url string Auth server URL --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s) --gphotos-batch-mode string Upload file batching sync|async|off (default "sync") --gphotos-batch-size int Max number of files in upload batch --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only --gphotos-read-size Set to read the size of media items --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000) --gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode --hdfs-username string Hadoop user name --header stringArray Set HTTP header for all transactions --header-download stringArray Set HTTP header for download transactions --header-upload stringArray Set HTTP header for upload transactions -h, --help help for rclone --hidrive-auth-url string Auth server URL --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") --hidrive-root-prefix string The root/parent folder for all paths (default "/") --hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw") --hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user") --hidrive-token string OAuth Access Token as a JSON blob --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix 
Cutoff/Threshold for chunked uploads (default 96Mi) --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-escape Do not escape URL metacharacters in path names --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of HTTP host to connect to --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi --ignore-case Ignore case in filters (case insensitive) --ignore-case-sync Ignore case when synchronizing --ignore-checksum Skip post copy check of checksums --ignore-errors Delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2" --imagekit-versions Include old versions in directory listings --immutable Do not modify files, fail if existing files have been modified --include stringArray Include files matching pattern --include-from stringArray Read file include patterns from file (use - to read from stdin) --inplace Download directly to destination file instead of atomic download to temp/rename -i, --interactive Enable interactive mode --internetarchive-access-key-id string IAS3 Access Key --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") --internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org") --internetarchive-secret-access-key string IAS3 Secret Key (password) --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s) --jottacloud-auth-url string Auth server URL --jottacloud-client-id string OAuth Client Id --jottacloud-client-secret string OAuth Client Secret --jottacloud-description string Description of the remote --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) 
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them --jottacloud-token string OAuth Access Token as a JSON blob --jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi) --koofr-description string Description of the remote --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use --koofr-mountid string Mount ID of the mount to use --koofr-password string Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password (obscured) --koofr-provider string Choose your storage provider --koofr-setmtime Does the backend support setting modification time (default true) --koofr-user string Your user name --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) --linkbox-description string Description of the remote --linkbox-token string Token from https://www.linkbox.to/admin/account -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive --local-description string Description of the remote --local-encoding Encoding The encoding for the backend (default Slash,Dot) --local-no-check-updated Don't check to see if the files change during upload --local-no-clone Disable reflink cloning for server-side copies --local-no-preallocate Disable preallocation of disk space for transferred files --local-no-set-modtime Disable setting modtime --local-no-sparse Disable sparse files for multi-thread downloads --local-nounc Disable UNC (long path names) conversion on Windows --local-time-type mtime|atime|btime|ctime Set what kind of time is returned (default mtime) --local-unicode-normalization Apply unicode NFC normalization to paths and filenames --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated) --log-file string Log everything to this file --log-format string Comma separated list of log format options (default "date,time") --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE) --log-systemd Activate systemd integration for the logger --low-level-retries int Number of low level retries to do (default 10) --mailru-auth-url string Auth server URL --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-client-id string OAuth Client Id --mailru-client-secret string OAuth Client Secret --mailru-description string Description of the remote --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf") --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi) --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed 
on disk (default 32Mi) --mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --max-backlog int Maximum number of objects in sync or check backlog (default 10000) --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) --max-depth int If set limits the recursion depth to this (default -1) --max-duration Duration Maximum duration rclone will transfer data for (default 0s) --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name --memory-description string Description of the remote --memprofile string Write memory profile to file -M, --metadata If set, preserve metadata when copying objects --metadata-exclude stringArray Exclude metadatas matching pattern --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) --metadata-filter stringArray Add a metadata filtering rule --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) --metadata-include stringArray Include metadatas matching pattern --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --metadata-mapper SpaceSepList Program to run to transforming metadata before upload --metadata-set stringArray Add metadata key=value when uploading --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""]) --metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from --metrics-baseurl string Prefix for URLs - leave blank for root --metrics-cert string TLS PEM key (concatenation of certificate and CA certificate) --metrics-client-ca string Client certificate authority to verify clients with --metrics-htpasswd string A htpasswd file - if not provided no authentication is done --metrics-key string TLS PEM Private key --metrics-max-header-bytes int Maximum size of request header (default 4096) --metrics-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0") --metrics-pass string Password for authentication --metrics-realm string Realm for authentication --metrics-salt string Password hashing salt (default "dlPL2MqE") --metrics-server-read-timeout Duration Timeout for server reading data (default 1h0m0s) --metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s) --metrics-template string User-specified template --metrics-user string User name for authentication --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) --modify-window Duration Max time 
diff to be considered the same (default 1ns) --multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi) --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi) --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --netstorage-account string Set the NetStorage account name --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) --no-check-certificate Do not verify the server SSL certificate (insecure) --no-check-dest Don't check the destination, copy regardless --no-console Hide console window (supported on Windows only) --no-gzip-encoding Don't set Accept-Encoding: gzip --no-traverse Don't traverse destination file system on copy --no-unicode-normalization Don't normalize unicode characters in filenames --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) --onedrive-auth-url string Auth server URL --onedrive-av-override Allows download of files the server thinks has a virus --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi) --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings --onedrive-hard-delete Permanently delete files on removal --onedrive-hash-type string Specify the hash in use for the backend (default "auto") --onedrive-link-password string Set the password for links created by the link command --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous") --onedrive-link-type string Set the type of the links created by the link command (default "view") --onedrive-list-chunk int Size of listing chunk (default 1000) --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default "global") --onedrive-root-folder-id string ID of the root folder --onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead --onedrive-token string OAuth Access Token as a JSON blob --onedrive-token-url string Token server 
url --oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --oos-compartment string Object storage compartment OCID --oos-config-file string Path to OCI config file (default "~/.oci/config") --oos-config-profile string Profile name inside the oci config file (default "Default") --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) --oos-description string Description of the remote --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery --oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000) --oos-namespace string Object storage namespace --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it --oos-provider string Choose your Auth Provider (default "env_auth") --oos-region string Object storage Region --oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm --oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to --oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated --oos-sse-customer-key-sha256 string If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption --oos-sse-kms-key-id string if using your own master key in vault, this header specifies the --oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard") --oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --order-by string Instructions on how to order the transfers, e.g. 
'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") --password-command SpaceSepList Command for supplying password for encrypted configuration --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-password string Your pcloud password (obscured) --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0") --pcloud-token string OAuth Access Token as a JSON blob --pcloud-token-url string Token server url --pcloud-username string Your pcloud username --pikpak-auth-url string Auth server URL --pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi) --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) --pikpak-root-folder-id string ID of the root folder --pikpak-token string OAuth Access Token as a JSON blob --pikpak-token-url string Token server url --pikpak-trashed-only Only show files that are in the trash --pikpak-upload-concurrency int Concurrency for multipart uploads (default 5) --pikpak-use-trash Send files to the trash instead of deleting permanently (default true) --pikpak-user string Pikpak username --pixeldrain-api-key string API key for your pixeldrain account --pixeldrain-api-url string The API endpoint to connect to. 
In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api") --pixeldrain-description string Description of the remote --pixeldrain-root-folder-id string Root of the filesystem to use (default "me") --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url -P, --progress Show progress during transfer --progress-terminal-title Show progress on the terminal title (requires -P/--progress) --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret --putio-description string Description of the remote --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-token string OAuth Access Token as a JSON blob --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) --qingstor-description string Description of the remote --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API --qingstor-env-auth Get QingStor credentials from runtime --qingstor-secret-access-key string QingStor Secret Access Key (password) --qingstor-upload-concurrency int Concurrency for multipart uploads (default 1) --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to --quatrix-api-key string API key for accessing Quatrix account --quatrix-description string Description of the remote --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-host string Host name of Quatrix account --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. 
It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
--quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
--quatrix-skip-project-folders Skip project folders in operations
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
--rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string A htpasswd file - if not provided no authentication is done
--rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s)
--rc-key string TLS PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
--s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-might-gzip Tristate Set this if the backend might gzip objects (default unset)
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-description string Description of the remote
--seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-connections int Maximum number of SFTP simultaneous connections, 0 for unlimited
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
--sftp-key-pem string Raw PEM-encoded private key
--sftp-key-use-agent When set forces the usage of the ssh-agent
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-macs SpaceSepList Space separated list of MACs (message authentication code) algorithms, ordered by preference
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-shell-type string The type of SSH shell on remote server, if any
--sftp-skip-links Set to skip any symlinks and any other non regular files
--sftp-socks-proxy string Socks 5 proxy host
--sftp-ssh SpaceSepList Path and arguments to external ssh binary
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
--sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
--sharefile-token-url string Token server url
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
--sia-description string Description of the remote
--sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--size-only Skip based on size only, not modtime or checksum
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-description string Description of the remote
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
--smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
--stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
--stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-description string Description of the remote
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
--storj-satellite-address string Satellite address (default "us1.storj.io")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
--suffix string Suffix to add to changed files
--suffix-keep-extension Preserve the extension when using --suffix
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-description string Description of the remote
--sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked (default 5Gi)
--swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-fetch-until-empty-page When paginating, always fetch unless we received an empty page
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-no-large-objects Disable support for static and dynamic large objects
--swift-partial-page-fetch-threshold int When paginating, fetch if the current page is within this percentage of the limit
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-use-segments-container Tristate Choose destination for large object segments (default unset)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER (default "DAEMON")
--temp-dir string Directory rclone will use for temporary files (default "/tmp")
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
--ulozto-app-token string The application token identifying the app. An app API key can be either found in the API
--ulozto-description string Description of the remote
--ulozto-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--ulozto-list-page-size int The size of a single page for list commands. 1-500 (default 500)
--ulozto-password string The password for the user (obscured)
--ulozto-root-folder-slug string If set, rclone will use this folder as the root folder for all operations. For example,
--ulozto-username string The username of the principal to operate as
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-description string Description of the remote
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
-u, --update Skip files that are newer on the destination
--uptobox-access-token string Your access token
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-owncloud-exclude-mounts Exclude ownCloud mounted storages
--webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
--webdav-unix-socket string Path to a unix domain socket to dial to, instead of opening a TCP connection directly
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
--yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-spoof-ua Set the user agent to match an official version of the yandex disk client. May help with upload performance (default true)
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
--zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
```
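
As an illustration of how the global and backend flags above combine on a single command line, here is a brief sketch. It is illustrative only: the remote name `s3remote:`, the bucket `my-bucket`, the credentials and the storage class value are placeholder assumptions; every flag used appears in the listing above.

```
# Sync a local directory to a hypothetical S3 remote with more parallel
# transfers, more retries and a 30 second stats interval.
rclone sync /path/to/local s3remote:my-bucket \
  --transfers 8 \
  --retries 5 \
  --s3-upload-concurrency 8 \
  --s3-storage-class STANDARD_IA \
  --stats 30s

# Start the remote control daemon with the web GUI and basic authentication
# (see the --rc-* flags above).
rclone rcd --rc-web-gui --rc-user admin --rc-pass 'example-password' --rc-addr localhost:5572
```

Backend-specific flags (the `--s3-*` group here) correspond to that backend's configuration options, while `--transfers`, `--retries` and `--stats` are global flags that apply to any command.
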
## See Also

* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the destination against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
* [rclone copyurl](/commands/rclone_copyurl/) - Copy the contents of the URL supplied to dest:path.
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of an encrypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone gitannex](/commands/rclone_gitannex/) - Speaks with git-annex over stdin/stdout.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.
* [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing.
* [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format.
* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as file system on a mountpoint.
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
* [rclone nfsmount](/commands/rclone_nfsmount/) - Mount the remote as file system on a mountpoint.
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone.
* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote.
* [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the empty directory at path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone selfupdate](/commands/rclone_selfupdate/) - Update the rclone binary.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces a sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone test](/commands/rclone_test/) - Run a test command.
* [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
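
Each command linked above has its own page describing its command-specific flags. As a quick sketch (output varies with the installed rclone version), the same help is also available from the binary itself:

```
# Show the help text, including available flags, for a single command.
rclone mount --help

# Print the version details (see also the -V/--version flag above).
rclone version
```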