Version v1.70.0

This commit is contained in:
Nick Craig-Wood
2025-06-17 17:52:35 +01:00
parent 92fea7eb1b
commit 9d464e8e9a
65 changed files with 14873 additions and 1444 deletions

View File

@@ -37,6 +37,8 @@ rclone [flags]
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -60,6 +62,7 @@ rclone [flags]
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-copy-blob Whether to use the Copy Blob API when copying to the same storage account (default true)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -72,6 +75,7 @@ rclone [flags]
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -86,6 +90,7 @@ rclone [flags]
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -160,12 +165,14 @@ rclone [flags]
--chunker-remote string Remote to chunk/unchunk
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--cloudinary-adjust-media-files-extensions Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems (default true)
--cloudinary-api-key string Cloudinary API Key
--cloudinary-api-secret string Cloudinary API Secret
--cloudinary-cloud-name string Cloudinary Environment Name
--cloudinary-description string Description of the remote
--cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
--cloudinary-media-extensions stringArray Cloudinary supported media extensions (default 3ds,3g2,3gp,ai,arw,avi,avif,bmp,bw,cr2,cr3,djvu,dng,eps3,fbx,flif,flv,gif,glb,gltf,hdp,heic,heif,ico,indd,jp2,jpe,jpeg,jpg,jxl,jxr,m2ts,mov,mp4,mpeg,mts,mxf,obj,ogv,pdf,ply,png,psd,svg,tga,tif,tiff,ts,u3ma,usdz,wdp,webm,webp,wmv)
--cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
--cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
@@ -204,6 +211,10 @@ rclone [flags]
--disable string Disable a comma separated list of features (use --disable help to see a list)
--disable-http-keep-alives Disable HTTP keep-alives and use each connection once
--disable-http2 Disable HTTP/2 in the global transport
--doi-description string Description of the remote
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -255,7 +266,6 @@ rclone [flags]
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
--dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -265,11 +275,14 @@ rclone [flags]
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-export-formats CommaSepList Comma separated list of preferred formats for exporting files (default html,md)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-show-all-exports Show all exportable files in listings
--dropbox-skip-exports Skip exportable files in all listings
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
-n, --dry-run Do a trial run with no permanent changes
@@ -298,6 +311,9 @@ rclone [flags]
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
--filescom-api-key string The API key used to authenticate with Files.com
@@ -364,7 +380,6 @@ rclone [flags]
--gofile-list-chunk int Number of items to list in each call (default 1000)
--gofile-root-folder-id string ID of the root folder
--gphotos-auth-url string Auth server URL
--gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -380,6 +395,7 @@ rclone [flags]
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
--hash-filter string Partition filenames by hash k/n or randomly @/n
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-description string Description of the remote
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
@@ -449,6 +465,8 @@ rclone [flags]
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-item-derive Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload (default true)
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
@@ -476,6 +494,7 @@ rclone [flags]
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
@@ -491,7 +510,7 @@ rclone [flags]
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-format Bits Comma separated list of log format options (default date,time)
--log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
@@ -512,6 +531,8 @@ rclone [flags]
--mailru-user string User name (usually email)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-buffer-memory SizeSuffix If set, don't allocate more than this amount of memory as buffers (default off)
--max-connections int Maximum number of simultaneous backend API connections, 0 for unlimited
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
@@ -553,6 +574,7 @@ rclone [flags]
--metrics-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--metrics-template string User-specified template
--metrics-user string User name for authentication
--metrics-user-from-header string User name from a defined HTTP header
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window Duration Max time diff to be considered the same (default 1ns)
@@ -560,6 +582,7 @@ rclone [flags]
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -601,6 +624,7 @@ rclone [flags]
--onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--onedrive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default off)
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Specify compartment OCID, if you need to list buckets
@@ -626,6 +650,7 @@ rclone [flags]
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-access string Files and folders will be uploaded with this access permission (default private) (default "private")
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -736,6 +761,7 @@ rclone [flags]
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
@@ -760,6 +786,8 @@ rclone [flags]
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-ibm-api-key string IBM API Key to be used to obtain IAM token
--s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
@@ -780,6 +808,7 @@ rclone [flags]
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sign-accept-encoding Tristate Set if rclone should include Accept-Encoding as part of the signature (default unset)
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
@@ -796,6 +825,7 @@ rclone [flags]
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-use-unsigned-payload Tristate Whether to use an unsigned payload in PutObject (default unset)
--s3-use-x-id Tristate Set if rclone should add x-id URL parameters (default unset)
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-version-deleted Show deleted file markers when using versions
@@ -822,6 +852,7 @@ rclone [flags]
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
--sftp-http-proxy string URL for HTTP CONNECT proxy
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
@@ -877,6 +908,7 @@ rclone [flags]
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
--smb-use-kerberos Use Kerberos authentication
--smb-user string SMB username (default "$USER")
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -965,7 +997,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -1017,6 +1049,7 @@ rclone [flags]
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone convmv](/commands/rclone_convmv/) - Convert file and directory names in place.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
* [rclone copyurl](/commands/rclone_copyurl/) - Copy the contents of the URL supplied content to dest:path.

View File

@@ -14,13 +14,18 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone from opening the auth
link in the default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
```
rclone authorize [flags]
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
```
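For example, a hedged invocation from a machine with a browser (the remote type and credentials below are placeholders):
```
# Authorize using rclone's built-in OAuth client for the "drive" backend
rclone authorize "drive"

# Or supply your own OAuth client credentials
rclone authorize "drive" "my_client_id" "my_client_secret"
```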
## Options

View File

@@ -93,6 +93,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -129,6 +130,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -74,6 +74,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -96,6 +96,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -82,6 +82,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -123,6 +123,7 @@ rclone config create name type [key value]* [flags]
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
--no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue

View File

@@ -21,12 +21,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
changing passwords programatically you can use the environment
changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
easier if you don't mind the unecrypted config file being on the disk
easier if you don't mind the unencrypted config file being on the disk
briefly.
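As a sketch, a script passed to `--password-command` could use the variable like this (the file paths are assumptions):
```
#!/bin/sh
# Hypothetical --password-command helper.
# RCLONE_PASSWORD_CHANGE=1 means rclone is asking for the new password;
# otherwise it expects the current one.
if [ "$RCLONE_PASSWORD_CHANGE" = "1" ]; then
    cat /secure/new-config-password
else
    cat /secure/current-config-password
fi
```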

View File

@@ -123,6 +123,7 @@ rclone config update name [key value]+ [flags]
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
--no-output Don't provide any output
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue

View File

@@ -0,0 +1,400 @@
---
title: "rclone convmv"
description: "Convert file and directory names in place."
versionIntroduced: v1.70
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/convmv/ and as part of making a release run "make commanddocs"
---
# rclone convmv
Convert file and directory names in place.
## Synopsis
convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
| Command | Description |
|------|------|
| `--name-transform prefix=XXXX` | Prepends XXXX to the file name. |
| `--name-transform suffix=XXXX` | Appends XXXX to the file name after the extension. |
| `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. |
| `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. |
| `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. |
| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. |
| `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. |
| `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. |
| `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. |
| `--name-transform base64encode` | Encodes the file name in Base64. |
| `--name-transform base64decode` | Decodes a Base64-encoded file name. |
| `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). |
| `--name-transform decoder=ENCODING` | Decodes the file name from the specified encoding. |
| `--name-transform charmap=MAP` | Applies a character mapping transformation. |
| `--name-transform lowercase` | Converts the file name to lowercase. |
| `--name-transform uppercase` | Converts the file name to UPPERCASE. |
| `--name-transform titlecase` | Converts the file name to Title Case. |
| `--name-transform ascii` | Strips non-ASCII characters. |
| `--name-transform url` | URL-encodes the file name. |
| `--name-transform nfc` | Converts the file name to NFC Unicode normalization form. |
| `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. |
| `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. |
| `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. |
| `--name-transform command=/path/to/my/program` | Executes an external program to transform file names. |
Conversion modes:
```
none
nfc
nfd
nfkc
nfkd
replace
prefix
suffix
suffix_keep_extension
trimprefix
trimsuffix
index
date
truncate
base64encode
base64decode
encoder
decoder
ISO-8859-1
Windows-1252
Macintosh
charmap
lowercase
uppercase
titlecase
ascii
url
regex
command
```
Char maps:
```
IBM-Code-Page-037
IBM-Code-Page-437
IBM-Code-Page-850
IBM-Code-Page-852
IBM-Code-Page-855
Windows-Code-Page-858
IBM-Code-Page-860
IBM-Code-Page-862
IBM-Code-Page-863
IBM-Code-Page-865
IBM-Code-Page-866
IBM-Code-Page-1047
IBM-Code-Page-1140
ISO-8859-1
ISO-8859-2
ISO-8859-3
ISO-8859-4
ISO-8859-5
ISO-8859-6
ISO-8859-7
ISO-8859-8
ISO-8859-9
ISO-8859-10
ISO-8859-13
ISO-8859-14
ISO-8859-15
ISO-8859-16
KOI8-R
KOI8-U
Macintosh
Macintosh-Cyrillic
Windows-874
Windows-1250
Windows-1251
Windows-1252
Windows-1253
Windows-1254
Windows-1255
Windows-1256
Windows-1257
Windows-1258
X-User-Defined
```
Encoding masks:
```
Asterisk
BackQuote
BackSlash
Colon
CrLf
Ctl
Del
Dollar
Dot
DoubleQuote
Exclamation
Hash
InvalidUtf8
LeftCrLfHtVt
LeftPeriod
LeftSpace
LeftTilde
LtGt
None
Percent
Pipe
Question
Raw
RightCrLfHtVt
RightPeriod
RightSpace
Semicolon
SingleQuote
Slash
SquareBracket
```
Examples:
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
// Output: stories/The Slow Brown Turtle!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
```
```
rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
// Output: stories/The Quick Brown Fox!
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
```
```
rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
// Output: stories/The Quick Brown Fox A Memoir draft.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
// Output: stories/The Quick Brown 🦊 Fox
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250617
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
```
Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
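For example, a hedged sketch of applying a transform during a copy (remote names are placeholders):
```
# Prefix each destination file name while copying
rclone copy source:path dest:path --name-transform "file,prefix=ARCHIVED_"
```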
### Files vs Directories
By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
However, some of the transforms are better applied to the whole path or just to directories.
To choose which part of the file path is affected, some tags can be added to the `--name-transform`:
| Tag | Effect |
|------|------|
| `file` | Only transform the leaf name of files (DEFAULT) |
| `dir` | Only transform name of directories - these may appear anywhere in the path |
| `all` | Transform the entire path for files and directories |
This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
For some conversions, using `all` is more likely to be useful, for example `--name-transform all,nfc`.
Note that `--name-transform` may not add path separators `/` to the name. This will cause an error.
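As a hedged illustration of the tags (the path reuses the examples above; the effects follow from the table):
```
# Default "file" tag: only the leaf file name is uppercased
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "file,uppercase"

# "dir" tag: only directory names in the path are uppercased
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "dir,uppercase"

# "all" tag: the entire path is uppercased
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
```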
### Ordering and Conflicts
* Transformations will be applied in the order specified by the user.
* If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
* If the `dir` tag is in use then directories anywhere in the path will be transformed.
* If the `all` tag is in use then directories and files anywhere in the path will be transformed.
* Each transformation will be run one path segment at a time.
* If a transformation adds a `/` or ends up with an empty path segment then that will be an error.
* It is up to the user to put the transformations in a sensible order.
* Conflicting transformations, such as `prefix` followed by `trimprefix` or `nfc` followed by `nfd`, are possible.
* Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the
user, allowing for intentional use cases (e.g., trimming one prefix before adding another).
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using `--dry-run` before execution.
### Race Conditions and Non-Deterministic Behavior
Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
* To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
* Consider disabling concurrency with `--transfers=1` if necessary.
* Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`.
```
rclone convmv dest:path --name-transform XXX [flags]
```
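For instance, a hedged check of a potentially conflicting transform before running it for real (the remote path is a placeholder):
```
rclone convmv remote:path --name-transform "file,replace=draft:final" --dry-run
```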
## Options
```
--create-empty-src-dirs Create empty source dirs on destination after move
--delete-empty-src-dirs Delete empty source dirs after move
-h, --help help for convmv
```
Options shared with other commands are described next.
See the [global flags page](/flags/) for global options not listed here.
### Copy Options
Flags for anything which can copy a file
```
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
--compare-dest stringArray Include additional server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-chunk-size SizeSuffix Chunk size for multi-thread downloads / uploads, if not set by filesystem (default 64Mi)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
### Important Options
Important flags useful for most commands
```
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
```
### Filter Options
Flags for filtering directory listings
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
### Listing Options
Flags for listing directories
```
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

View File

@@ -116,6 +116,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -152,6 +153,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -36,6 +36,8 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
@@ -79,6 +81,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -115,6 +118,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -17,7 +17,7 @@ Setting `--auto-filename` will attempt to automatically determine the
filename from the URL (after any redirections) and use it in the
destination path.
With `--auto-filename-header` in addition, if a specific filename is
With `--header-filename` in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
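A hedged example combining these flags (the URL and destination are placeholders):
```
rclone copyurl https://example.com/download remote:backups/ --auto-filename --header-filename --print-filename
```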
@@ -28,7 +28,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
## Troublshooting
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:

View File

@@ -99,6 +99,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -75,6 +75,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -36,6 +36,7 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
* sha512
Then
@@ -74,6 +75,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -70,6 +70,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -81,6 +81,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -178,6 +178,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -150,6 +150,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -71,6 +71,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -58,6 +58,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -571,11 +571,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -900,6 +900,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened, and once they have been, they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -951,6 +990,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -980,6 +1020,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -91,6 +91,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -127,6 +128,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -82,6 +82,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -118,6 +119,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -98,6 +98,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -572,11 +572,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -901,6 +901,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -957,6 +996,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -986,6 +1026,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -15,6 +15,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
implement this command directly, in which case `--checkers` will be ignored.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
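For example (a sketch only, where the command name is assumed from this page and `remote:dir` is a placeholder), a cautious trial run with more concurrency could look like:
```
# --checkers controls the concurrency as described above;
# --dry-run makes this a no-op trial (command name assumed from this page)
rclone purge remote:dir --checkers 16 --dry-run
```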

View File

@@ -126,7 +126,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--rc-user` and `--rc-pass` flags.
If no static users are configured by either of the above methods, and client
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with `--user-from-header` (e.g., `--rc-user-from-header=x-remote-user`).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
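For example (an illustrative sketch, with `X-Remote-User` standing in for whatever header your reverse proxy sets):
```
# bind to localhost so only the local reverse proxy can reach it
rclone rcd --rc-addr 127.0.0.1:5572 --rc-user-from-header X-Remote-User
```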
@@ -190,6 +194,7 @@ Flags to control the Remote Control API
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui

View File

@@ -134,11 +134,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -463,6 +463,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -500,6 +539,7 @@ rclone serve dlna remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -527,6 +567,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -146,11 +146,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -475,6 +475,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -531,6 +570,7 @@ rclone serve docker [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -560,6 +600,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -127,11 +127,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -456,6 +456,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -577,6 +616,7 @@ rclone serve ftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -604,6 +644,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -128,7 +128,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
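For example (a sketch only, with `X-Remote-User` as a placeholder header name set by the reverse proxy):
```
# the reverse proxy terminates authentication and forwards the username header
rclone serve http remote:path --addr 127.0.0.1:8080 --user-from-header X-Remote-User
```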
@@ -245,11 +249,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -574,6 +578,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -664,19 +707,19 @@ rclone serve http remote:path [flags]
## Options
```
--addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
--addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
--client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string Path to TLS PEM private key file
--key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -694,6 +737,7 @@ rclone serve http remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
--user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -704,6 +748,7 @@ rclone serve http remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -731,6 +776,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -7,8 +7,6 @@ versionIntroduced: v1.65
---
# rclone serve nfs
*Not available in Windows.*
Serve the remote as an NFS mount
## Synopsis
@@ -55,7 +53,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -79,6 +77,12 @@ Where `$PORT` is the same port number used in the `serve nfs` command
and `$HOSTNAME` is the network address of the machine that `serve nfs`
was run on.
If `--vfs-metadata-extension` is in use then for the `--nfs-cache-type disk`
and `--nfs-cache-type cache` the metadata files will have the file
handle of their parent file suffixed with `0x00, 0x00, 0x00, 0x01`.
This means they can be looked up directly from the parent file handle
if desired.
This command is only available on Unix platforms.
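For example (a minimal sketch combining the options discussed above; the address, cache type and extension are placeholders you may want to change):
```
# serve with the on disk handle cache and expose metadata files
rclone serve nfs remote: --addr :2049 --nfs-cache-type disk --vfs-metadata-extension .metadata
```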
## VFS - Virtual File System
@@ -178,11 +182,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -507,6 +511,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -543,6 +586,7 @@ rclone serve nfs remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -570,6 +614,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -162,7 +162,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -191,16 +195,16 @@ rclone serve restic remote:path [flags]
## Options
```
--addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
--addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
--cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
--client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string Path to TLS PEM private key file
--key string TLS PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -211,6 +215,7 @@ rclone serve restic remote:path [flags]
--server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--user string User name for authentication
--user-from-header string User name from a defined HTTP header
```
See the [global flags page](/flags/) for global options not listed here.

View File

@@ -27,7 +27,7 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
access.
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#ssl-tls) for more information.
SSL docs](#tls-ssl) for more information.
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`. Using
@@ -82,7 +82,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `disable_multipart_uploads = true` is to work around
Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -154,7 +154,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -334,11 +338,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -663,6 +667,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
```
@@ -672,22 +715,22 @@ rclone serve s3 remote:path [flags]
## Options
```
--addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
--addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
--client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 666)
--force-path-style If true use path style access if false use virtual hosted style (default true) (default true)
--force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for s3
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string Path to TLS PEM private key file
--key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -705,6 +748,7 @@ rclone serve s3 remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
--user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -715,6 +759,7 @@ rclone serve s3 remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -742,6 +787,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -170,11 +170,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -499,6 +499,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -620,6 +659,7 @@ rclone serve sftp remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -647,6 +687,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -171,7 +171,11 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
Alternatively, you can have the reverse proxy manage authentication and use the
username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
@@ -288,11 +292,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -617,6 +621,45 @@ _WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## VFS Metadata
If you use the `--vfs-metadata-extension` flag you can get the VFS to
expose files which contain the [metadata](/docs/#metadata) as a JSON
blob. These files will not appear in the directory listing, but can be
`stat`-ed and opened and once they have been they **will** appear in
directory listings until the directory cache expires.
Note that some backends won't create metadata unless you pass in the
`--metadata` flag.
For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata`
we get
```
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
$ cat /mnt/1G.metadata
{
"atime": "2025-03-04T17:34:22.317069787Z",
"btime": "2025-03-03T16:03:37.708253808Z",
"gid": "1000",
"mode": "100664",
"mtime": "2025-03-03T16:03:39.640238323Z",
"uid": "1000"
}
$ ls -l /mnt/
total 1048578
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
-rw-rw-r-- 1 user user 185 Mar 3 16:03 1G.metadata
```
If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -707,12 +750,12 @@ rclone serve webdav remote:path [flags]
## Options
```
--addr stringArray IPaddress:Port, :Port or [unix://]/path/to/socket to bind server to (default [127.0.0.1:8080])
--addr stringArray IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
--client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
@@ -721,7 +764,7 @@ rclone serve webdav remote:path [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string Path to TLS PEM private key file
--key string TLS PEM Private key
--link-perms FileMode Link permissions (default 666)
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
@@ -739,6 +782,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask FileMode Override the permission bits set by the filesystem (not supported on Windows) (default 002)
--user string User name for authentication
--user-from-header string User name from a defined HTTP header
--vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
@@ -749,6 +793,7 @@ rclone serve webdav remote:path [flags]
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
--vfs-metadata-extension string Set the extension to read metadata from
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
@@ -776,6 +821,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -61,6 +61,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -56,6 +56,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -147,6 +147,7 @@ Flags for anything which can copy a file
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 256Mi)
--multi-thread-streams int Number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--name-transform stringArray Transform paths during the copy process
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-dir-modtime Don't update directory modification times
@@ -171,6 +172,7 @@ Flags used for sync commands
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
--list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
@@ -202,6 +204,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -19,6 +19,7 @@ unless `--no-create` or `--recursive` is provided.
If `--recursive` is used then the modification time is set recursively
on all existing files found under the path. Filters are supported,
and you can test with the `--dry-run` or the `--interactive`/`-i` flag.
This will touch `--transfers` files concurrently.
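For example (an illustrative sketch; `remote:dir` is a placeholder):
```
# touch everything under remote:dir, 8 files at a time, without writing
rclone touch remote:dir --recursive --transfers 8 --dry-run
```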
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:
@@ -71,6 +72,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -81,6 +81,7 @@ Flags for filtering directory listings
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--hash-filter string Partition filenames by hash k/n or randomly @/n
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)

View File

@@ -46,6 +46,9 @@ Or
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
If you supply the --deps flag then rclone will print a list of all the
packages it depends on and their versions along with some other
information about the build.
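For example (the exact output depends on how your binary was built):
```
rclone version --deps
```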
```
@@ -56,6 +59,7 @@ rclone version [flags]
```
--check Check for new version
--deps Show the Go dependencies
-h, --help help for version
```