First you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
rclone config
rclone check
Checks the files in the source and destination match.
Synopsis
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
rclone check source:path dest:path
Options
--download Check by downloading rather than with hash.
rclone ls
List all the objects in the path with size and path.
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
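For intuition, these flags behave much like the coreutils byte-slicing tools. A sketch on a local file (the file name is just an example; rclone cat itself is not invoked here):

```shell
# create a small local file standing in for an object on a remote
printf 'hello world' > /tmp/rclone-cat-demo.txt
# like: rclone cat --head 5 remote:demo.txt
head -c 5 /tmp/rclone-cat-demo.txt
# like: rclone cat --tail 5 remote:demo.txt (or --offset -5 --count 5)
tail -c 5 /tmp/rclone-cat-demo.txt
```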
rclone cat remote:path
Options
--count int Only print N characters. (default -1)
--discard Discard the output instead of printing.
--head int Only print the first N characters.
--offset int Start printing at offset N (or from end if -ve).
--tail int Only print the last N characters.
rclone copyto
Copy files from source to dest, skipping already copied
Synopsis
see copy command for full details
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
rclone copyto source:path dest:path
rclone cryptcheck
Cryptcheck checks the integrity of a crypted remote.
Synopsis
rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
After it has run it will log the status of the cryptedremote:.
rclone cryptcheck remote:path cryptedremote:path
rclone genautocomplete
Output bash completion script for rclone.
Synopsis
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete
rclone genautocomplete [output_file]
rclone gendocs
Output markdown docs for rclone to the directory supplied.
Synopsis
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory
rclone listremotes
List all the remotes in the config file.
Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes
Options
-l, --long Show the type as well as names.
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
Synopsis
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config. Check it works with rclone ls etc.
Start the mount like this (note the & on the end to put rclone in the background).
rclone mount remote:path/to/files /path/to/local/mount &
Stop the mount with
fusermount -u /path/to/local/mount
Or if that fails try
fusermount -z -u /path/to/local/mount
Or with OS X
umount /path/to/local/mount
Limitations
This can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD and OS X at the moment.
rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
Filters
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
Bugs
All the remotes should work for read, but some may not for write
Move directories
rclone mount remote:path /path/to/mountpoint
Options
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
Move file or directory from source to dest.
Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
Important: Since this can cause data loss, test first with the --dry-run flag.
rclone moveto source:path dest:path
rclone obscure
Obscure password for use in the rclone.conf
Synopsis
Obscure password for use in the rclone.conf
rclone obscure password
rclone rmdirs
Remove any empty directories under the path.
Synopsis
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path
If you are using the root directory on its own then don't quote it (see #464 for why), eg
rclone copy E:\ remote:backup
Server Side Copy
Most remotes (but not all - see the overview) support server side copy.
This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
Eg
rclone copy s3:oldbucket s3:newbucket
Will copy the contents of oldbucket to newbucket without downloading and re-uploading.
Remotes which don't support server side copy will download and re-upload in this case.
Server side copies are used with sync and copy and will be identified in the log when using the -v flag. They may also be used with move if the remote doesn't support server side move.
Server side copies will only be attempted if the remote names are the same.
This can be used when scripting to make aged backups efficiently, eg
Options
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
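So, for example, a size of 100M means 100 * 2**20 bytes. A quick sketch of the binary multipliers in the shell:

```shell
# binary multipliers for the k, M and G suffixes
echo "k = $(( 1 << 10 ))"   # 1024
echo "M = $(( 1 << 20 ))"   # 1048576
echo "G = $(( 1 << 30 ))"   # 1073741824
```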
--backup-dir=DIR
When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.
For example
rclone sync /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.
If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.
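A sketch of that scripted approach with a dated directory name (the remote name and paths are examples only; the rclone invocation is left commented out):

```shell
# build a dated backup directory name, eg remote:old/2017-03-18
BACKUP_DIR="remote:old/$(date +%Y-%m-%d)"
echo "$BACKUP_DIR"
# rclone sync /path/to/local remote:current --backup-dir "$BACKUP_DIR"
```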
--bwlimit=BANDWIDTH_SPEC
This option controls the bandwidth limit. Limits can be specified in two ways: as a single limit, or as a timetable.
Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.
Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
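The conversion above is easy to check; awk is used here only for the floating point division:

```shell
# 10 Mbit/s link, target half of it: 5 Mbit/s = 5/8 MByte/s
awk 'BEGIN { printf "%.3f\n", 5 / 8 }'   # prints 0.625
# so: rclone --bwlimit 0.625M ...
```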
--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.
Set to 0 to disable the buffering for the minimum memory use.
--checkers=N
The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.
-c, --checksum
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to ensure that files are equal.
Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.
When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.
--config=CONFIG_FILE
Specify the location of the rclone config file.
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf
If you run rclone -h and look at the help for the --config option you will see where the default location is for you.
Use this flag to override the config location, eg rclone --config=".myconfig" .config.
--contimeout=TIME
Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.
--dedupe-mode MODE
Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
-n, --dry-run
Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.
--ignore-checksum
Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.
You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.
--ignore-existing
Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
--ignore-times
Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.
--log-level LEVEL
This sets the log level for rclone. The default log level is NOTICE.
DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.
NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR is equivalent to -q. It only outputs error messages.
--low-level-retries NUMBER
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.
--stats-unit=bits|bytes
By default, data transfer rates will be printed in bytes/second. This option allows the data rate to be printed in bits/second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
The default is bytes.
--suffix=SUFFIX
This is for use with --backup-dir only. If this isn't set then --backup-dir will move files with their original name. If it is set then the files will have SUFFIX added on to them.
See --backup-dir for more info.
--syslog
On capable OSes (not Windows or Plan9) send all log output to syslog.
This can be useful for running rclone in a script or with rclone mount.
--syslog-facility string
If using --syslog this sets the syslog facility (eg KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.
--track-renames
By default rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync, copy, and move operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.
Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.
Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.
--delete-(before,during,after)
This option allows you to specify when files on your destination are deleted when you sync folders.
Specifying the value --delete-before will delete all files present on the destination, but not on the source, before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.
Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.
Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.
--timeout=TIME
This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.
-u, --update
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.
On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.
-v, -vv, --verbose
With -v rclone will tell you about each file that is transferred and a small number of significant events.
With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
-V, --version
Prints the version number
Configuration Encryption
This option defaults to false.
This should be used only for testing.
--no-traverse
The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.
If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.
However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse.
It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst won't load either the source or destination listings into memory so will use the minimum amount of memory.
Logging
rclone has 4 levels of logging, Error, Notice, Info and Debug.
By default rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls).
By default rclone will produce Error and Notice level messages.
If you use the -q flag, rclone will only produce Error messages.
If you use the -v flag, rclone will produce Error, Notice and Info messages.
If you use the -vv flag, rclone will produce Error, Notice, Info and Debug messages.
You can also control the log levels with the --log-level flag.
If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.
If you use the --syslog flag then rclone will log to syslog and the --syslog-facility flag controls which facility it uses.
Rclone prefixes all log messages with their level in capitals, eg INFO which makes it easy to grep the log file for different kinds of information.
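For example, to pull one level out of a saved log file (the log lines below are simplified stand-ins for real rclone output):

```shell
# make a sample log with mixed levels
printf 'INFO  : file1.txt: Copied (new)\nERROR : file2.txt: Failed to copy\n' > /tmp/rclone-demo.log
# keep only the errors
grep 'ERROR' /tmp/rclone-demo.log
```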
Exit Code
If any errors occurred during the command, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.
During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
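In a script this means failures can be detected with ordinary exit-status checks; a sketch, with a stand-in function in place of the real rclone invocation:

```shell
# stand-in for e.g.: rclone sync /path/to/local remote:backup
do_sync() { false; }

if do_sync; then
    echo "sync succeeded"
else
    echo "sync failed - check the log"
fi
```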
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
Options
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.
For example to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.
The same parser is used for the options and the environment variables so they take exactly the same form.
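That naming rule can be sketched as a small (hypothetical) shell helper:

```shell
# derive the environment variable name from a long option name
opt_to_env() {
    printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}
opt_to_env --stats             # -> RCLONE_STATS
opt_to_env --drive-use-trash   # -> RCLONE_DRIVE_USE_TRASH
```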
Config file
You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through rclone config by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for --config in rclone help).
To find the name of the environment variable that you need to set, take RCLONE_ + name of remote + _ + name of config file option and make it all uppercase.
For example to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):
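A sketch of what that might look like (the key values are placeholders; the final rclone call is left commented out):

```shell
# configure a remote named mys3: entirely from the environment
export RCLONE_CONFIG_MYS3_TYPE=s3
export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
# rclone lsd mys3:
```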
Note that if you want to create a remote using environment variables you must create the ..._TYPE variable as above.
Other environment variables
RCLONE_CONFIG_PASS set to contain your config file password (see Configuration Encryption section)
HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
The environment values may be either a complete URL or a "host[:port]" form, in which case the "http" scheme is assumed.
Configuring rclone on a remote / headless machine
Some of the configurations (those involving oauth2) require an Internet connected web browser.
If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.
R/W
SFTP
-
Yes
Depends
No
-
The local filesystem
All
Yes
Case Insensitive
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
The local filesystem and SFTP may or may not be case sensitive depending on OS.
Windows - usually case insensitive, though case is preserved
OSX - usually case insensitive, though it is possible to format case sensitive
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / SSH/SFTP Connection
   \ "sftp"
13 / Yandex Disk
   \ "yandex"
Storage> 7
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
* Say Y if not sure
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
y/e/d> y
--drive-skip-gdocs
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
Making your own client_id
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / SSH/SFTP Connection
   \ "sftp"
13 / Yandex Disk
   \ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
   / Asia Pacific (Tokyo) Region
 8 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
 9 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
10 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
11 | Needs location constraint sa-east-1.
   \ "sa-east-1"
   / If using an S3 clone that only understands v2 signatures
12 | eg Ceph/Dreamhost
   | set this and make sure you set the endpoint.
   \ "other-v2-signature"
   / If using an S3 clone that understands v4 signatures set this
13 | and make sure you set the endpoint.
   \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
-endpoint>
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
@@ -1515,7 +1686,11 @@ Choose a number from below, or type in your own value
\ "ap-southeast-2"
8 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / South America (Sao Paulo) Region.
+ 9 / Asia Pacific (Seoul)
+ \ "ap-northeast-2"
+10 / Asia Pacific (Mumbai)
+ \ "ap-south-1"
+11 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -1562,8 +1737,11 @@ env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
+acl = private
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -1689,7 +1867,7 @@ access_key_id> WLGDGYAQYIGI833EV05A
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
-location_constraint>
+location_constraint>
server_side_encryption>
Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the --size-only flag of rclone.
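Since size is the only reliable comparison in this case, a sync might be run like this (a sketch; minio: is a placeholder for the remote configured above):

```shell
# Compare by size only, since Minio doesn't expose MD5 checksums
rclone --size-only sync /home/source minio:bucket
```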
So once set up, for example to copy files into a bucket
@@ -1722,27 +1900,31 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 10
+Storage> 11
User name to log in.
user> user_name
API key or password.
@@ -1764,25 +1946,28 @@ Choose a number from below, or type in your own value
auth> 1
User domain - optional (v3 auth)
domain> Default
-Tenant name - optional
-tenant>
+Tenant name - optional for v1 auth, required otherwise
+tenant> tenant_name
Tenant domain - optional (v3 auth)
tenant_domain>
Region name - optional
-region>
+region>
Storage URL - optional
-storage_url>
-Remote config
+storage_url>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
-auth_version>
+auth_version>
+Remote config
--------------------
[remote]
user = user_name
key = password_or_api_key
auth = https://auth.api.rackspacecloud.com/v1.0
-tenant =
-region =
-storage_url =
+domain = Default
+tenant =
+tenant_domain =
+region =
+storage_url =
+auth_version =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -1848,39 +2033,43 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 4
Dropbox App Key - leave blank normally.
-app_key>
+app_key>
Dropbox App Secret - leave blank normally.
-app_secret>
+app_secret>
Remote config
Please visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
--------------------
[remote]
-app_key =
-app_secret =
+app_key =
+app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK
@@ -1926,65 +2115,68 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 5
+Storage> 6
Google Application Client Id - leave blank normally.
-client_id>
+client_id>
Google Application Client Secret - leave blank normally.
-client_secret>
+client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
-service_account_file>
+service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
- * Object owner gets OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Object owner gets OWNER access, and project team owners get OWNER access.
- 2) bucketOwnerFullControl
- * Object owner gets OWNER access, and project team owners get READER access.
- 3) bucketOwnerRead
- * Object owner gets OWNER access [default if left blank].
- 4) private
- * Object owner gets OWNER access, and project team members get access according to their roles.
- 5) projectPrivate
- * Object owner gets OWNER access, and all Users get READER access.
- 6) publicRead
+ 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Object owner gets OWNER access, and project team owners get OWNER access.
+ \ "bucketOwnerFullControl"
+ 3 / Object owner gets OWNER access, and project team owners get READER access.
+ \ "bucketOwnerRead"
+ 4 / Object owner gets OWNER access [default if left blank].
+ \ "private"
+ 5 / Object owner gets OWNER access, and project team members get access according to their roles.
+ \ "projectPrivate"
+ 6 / Object owner gets OWNER access, and all Users get READER access.
+ \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
- * Project team owners get OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Project team owners get OWNER access [default if left blank].
- 2) private
- * Project team members get access according to their roles.
- 3) projectPrivate
- * Project team owners get OWNER access, and all Users get READER access.
- 4) publicRead
- * Project team owners get OWNER access, and all Users get WRITER access.
- 5) publicReadWrite
+ 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Project team owners get OWNER access [default if left blank].
+ \ "private"
+ 3 / Project team members get access according to their roles.
+ \ "projectPrivate"
+ 4 / Project team owners get OWNER access, and all Users get READER access.
+ \ "publicRead"
+ 5 / Project team owners get OWNER access, and all Users get WRITER access.
+ \ "publicReadWrite"
bucket_acl> 2
Remote config
-Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine or Y didn't work
@@ -1998,8 +2190,8 @@ Got code
--------------------
[remote]
type = google cloud storage
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
@@ -2041,40 +2233,50 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
-client_id>
+client_id>
Amazon Application Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
@@ -2094,7 +2296,7 @@ y/e/d> y
Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
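Such a checksum-based sync could be run as follows (a sketch; remote is a placeholder for your Amazon Drive remote):

```shell
# Compare MD5SUMs rather than (unreliable) modification times
rclone --checksum sync /home/source remote:backup
```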
Deleting files
-
Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.
+
Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
Using with non .com Amazon accounts
Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.
Specific options
@@ -2114,10 +2316,10 @@ y/e/d> y
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as it would any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
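Put together, such a copy might look like this (a sketch; remote is a placeholder for your Amazon Drive remote):

```shell
# Skip files that would exceed Amazon Drive's unpublished size limit
rclone --max-size 50000M copy /home/source remote:backup
```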
-
Microsoft One Drive
+
Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory.
-
The initial setup for One Drive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.
+
The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -2130,31 +2332,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 9
+Storage> 10
Microsoft App Client Id - leave blank normally.
-client_id>
+client_id>
Microsoft App Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -2168,8 +2374,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -2179,17 +2385,17 @@ y/e/d> y
See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
-
List directories in top level of your One Drive
+
List directories in top level of your OneDrive
rclone lsd remote:
-
List all the files in your One Drive
+
List all the files in your OneDrive
rclone ls remote:
-
To copy a local directory to an One Drive directory called backup
+
To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
Modified time and hashes
-
One Drive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
+
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive supports SHA1 type hashes, so you can use the --checksum flag.
Deleting files
-
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.
+
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
Specific options
Here are the command line options specific to this cloud storage system.
--onedrive-chunk-size=SIZE
@@ -2197,9 +2403,9 @@ y/e/d> y
--onedrive-upload-cutoff=SIZE
Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
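For example, to raise the cutoff (a sketch with a hypothetical value; it must stay <= 100MB):

```shell
# Use single-request uploads for files up to 50M before chunking
rclone --onedrive-upload-cutoff 50M copy /home/source remote:backup
```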
Limitations
-
Note that One Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-
Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!
-
There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
+
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+
Rclone only supports your default OneDrive, and doesn't work with OneDrive for Business. Both these issues may be fixed at some point depending on user demand!
+
There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.
The largest allowed file size is 10GiB (10,737,418,240 bytes).
Hubic
Paths are specified as remote:path
@@ -2216,31 +2422,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 7
+Storage> 8
Hubic Client Id - leave blank normally.
-client_id>
+client_id>
Hubic Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -2254,8 +2464,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -2295,25 +2505,29 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 3
Account ID
@@ -2321,13 +2535,13 @@ account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
-endpoint>
+endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
-endpoint =
+endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2442,31 +2656,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 11
+Storage> 13
Yandex Client Id - leave blank normally.
-client_id>
+client_id>
Yandex Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -2480,8 +2698,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
@@ -2503,6 +2721,93 @@ y/e/d> y
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.
MD5 checksums
MD5 checksums are natively supported by Yandex Disk.
SFTP
SFTP is the Secure (or SSH) File Transfer Protocol. It runs over SSH v2 and is standard with most modern SSH installations.
+
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the users home directory.
+
Here is an example of making a SFTP configuration. First run
+
rclone config
+
This will guide you through an interactive setup process.
+
No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 7 / Google Drive
+ \ "drive"
+ 8 / Hubic
+ \ "hubic"
+ 9 / Local Disk
+ \ "local"
+10 / Microsoft OneDrive
+ \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
+ \ "yandex"
+Storage> 12
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "example.com"
+host> example.com
+SSH username, leave blank for current username, ncw
+user>
+SSH port
+port>
+SSH password, leave blank to use ssh-agent
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Remote config
+--------------------
+[remote]
+host = example.com
+user =
+port =
+pass =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+
This remote is called remote and can now be used like this
+
See all directories in the home directory
+
rclone lsd remote:
+
Make a new directory
+
rclone mkdir remote:path/to/directory
+
List the contents of a directory
+
rclone ls remote:path/to/directory
+
Sync /home/local/directory to the remote directory, deleting any excess files in the directory.
rclone sync /home/local/directory remote:directory
Modified times are stored on the server to 1 second precision.
+
Modified times are used in syncing and are fully supported.
+
Limitations
+
SFTP does not support any checksums.
+
SFTP isn't supported under plan9 until this issue is fixed.
+
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth
+
Note that --timeout isn't supported (but --contimeout is).
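A connection timeout can still be set, for instance (a sketch; remote is a placeholder for your SFTP remote):

```shell
# --contimeout works with SFTP even though --timeout is ignored
rclone --contimeout 30s lsd remote:
```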
Crypt
The crypt remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
@@ -2538,12 +2843,14 @@ Choose a number from below, or type in your own value
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-12 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
-"myremote:bucket" or "myremote:"
+"myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
@@ -2590,7 +2897,11 @@ d) Delete this remote
y/e/d> y
Important The password stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.
-
Note that rclone does not encrypt * file length - this can be calcuated within 16 bytes * modification time - used for syncing
+
Note that rclone does not encrypt
+
+
file length - this can be calculated within 16 bytes
+
modification time - used for syncing
+
Specifying the remote
In normal use, make sure the remote has a : in it. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will encrypt stuff to that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory.
If you specify the remote as remote:path/to/dir then rclone will store encrypted files in path/to/dir on the remote. If you are using file name encryption, then when you save files to secret:subdir/subfile this will store them in the unencrypted path path/to/dir but the subdir/subfile bit will be encrypted.
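A crypt remote of this form ends up in the config file roughly as follows (a sketch; the key names follow the questions in the transcript above, and the obscured password value is a placeholder):

```ini
[secret]
type = crypt
remote = remote:path/to/dir
filename_encryption = standard
password = <obscured password>
```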
Here are some of the features of the file name encryption modes
-
Off * doesn't hide file names or directory structure * allows for longer file names (~246 characters) * can use sub paths and copy single files
-
Standard * file names encrypted * file names can't be as long (~156 characters) * can use sub paths and copy single files * directory structure visibile * identical files names will have identical uploaded names * can use shortcuts to shorten the directory recursion
+
Off
+
+
doesn't hide file names or directory structure
+
allows for longer file names (~246 characters)
+
can use sub paths and copy single files
+
+
Standard
+
+
file names encrypted
+
file names can't be as long (~156 characters)
+
can use sub paths and copy single files
+
directory structure visible
+
identical file names will have identical uploaded names
+
can use shortcuts to shorten the directory recursion
+
Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
Modified time and hashes
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
+
Note that you should use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.
+
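For example (a sketch; secret: is a crypt remote wrapping the plaintext held in /home/local/files):

```shell
# Verify the plaintext files against their encrypted copies
rclone cryptcheck /home/local/files secret:
```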
Specific options
+
Here are the command line options specific to this cloud storage system.
+
--crypt-show-mapping
+
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
+
This is so you can work out which encrypted names correspond to which decrypted names, just in case you need to do something with the encrypted file names, or for debugging purposes.
+
Backing up a crypted remote
+
If you wish to back up a crypted remote, it is recommended that you use rclone sync on the encrypted files, and make sure the passwords are the same in the new encrypted remote.
+
This will have the following advantages
+
+
rclone sync will check the checksums while copying
+
you can use rclone check between the encrypted remotes
+
you don't decrypt and encrypt unnecessarily
+
+
For example, let's say you have your original remote at remote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.
+
To sync the two remotes you would do
+
rclone sync remote:crypt remote2:crypt
+
And to check the integrity you would do
+
rclone check remote:crypt remote2:crypt
File formats
File encryption
Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.
@@ -2697,7 +3040,7 @@ $ rclone -q ls secret:
rclone sync /home/source /tmp/destination
Will sync /home/source to /tmp/destination
These can be configured into the config file for consistencies sake, but it is probably easier not to.
-
Modified time
+
Modified time
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -2717,8 +3060,31 @@ nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src but not on z:\dst. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
-
Specific options
+
Specific options
Here are the command line options specific to local storage
+
--copy-links, -L
+
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
+
If you supply this flag then rclone will follow the symlink and copy the pointed to file or directory.
+
This flag applies to all commands.
+
For example, supposing you have a directory structure like this
+
$ tree /tmp/a
+/tmp/a
+├── b -> ../b
+├── expected -> ../expected
+├── one
+└── two
+ └── three
+
Then you can see the difference with and without the flag like this
+
$ rclone ls /tmp/a
+ 6 one
+ 6 two/three
+
and
+
$ rclone -L ls /tmp/a
+ 4174 expected
+ 6 one
+ 6 two/three
+ 6 b/two
+ 6 b/one
--one-file-system, -x
This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
For example if you have a directory hierarchy like this
@@ -2742,6 +3108,101 @@ nounc = true
NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as a valid flag.
Changelog
+
v1.36 - 2017-03-18
+
+
New Features
+
SFTP remote (Jack Schmidt)
+
Re-implement sync routine to work a directory at a time reducing memory usage
+
Logging revamped to be more inline with rsync - now much quieter
+
+
-v only shows transfers
+
-vv is for full debug
+
--syslog to log to syslog on capable platforms
+
+
Implement --backup-dir and --suffix
+
Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
+
Add time-based bandwidth limits (Lukas Loesche)
+
rclone cryptcheck: checks integrity of crypt remotes
+
Allow all config file variables and options to be set from environment variables
+
Add --buffer-size parameter to control buffer size for copy
+
Make --delete-after the default
+
Add --ignore-checksum flag (fixed by Hisham Zarka)
+
rclone check: Add --download flag to check all the data, not just hashes
+
rclone cat: add --head, --tail, --offset, --count and --discard
+
rclone config: when choosing from a list, allow the value to be entered too
+
rclone config: allow rename and copy of remotes
+
rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
+
Comply with XDG Base Directory specification (Dario Giovannetti)
+
+
this moves the default location of the config file in a backwards compatible way
+
+
Release changes
+
+
Ubuntu snap support (Dedsec1)
+
Compile with go 1.8
+
MIPS/Linux big and little endian support
+
+
Bug Fixes
+
Fix copyto copying things to the wrong place if the destination dir didn't exist
+
Fix parsing of remotes in moveto and copyto
+
Fix --delete-before deleting files on copy
+
Fix --files-from with an empty file copying everything
+
Fix sync: don't update mod times if --dry-run set
+
Fix MimeType propagation
+
Fix filters to add ** rules to directory rules
+
Local
+
Implement -L, --copy-links flag to allow rclone to follow symlinks
+
Open files in write only mode so rclone can write to an rclone mount
diff --git a/MANUAL.md b/MANUAL.md
index ba1be9f85..278eb8ae3 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Jan 02, 2017
+% Mar 18, 2017
Rclone
======
@@ -19,6 +19,7 @@ Rclone is a command line program to sync files and directories to and from
* Hubic
* Backblaze B2
* Yandex Disk
+ * SFTP
* The local filesystem
Features
@@ -66,9 +67,9 @@ Fetch and unpack
Copy binary file
- sudo cp rclone /usr/sbin/
- sudo chown root:root /usr/sbin/rclone
- sudo chmod 755 /usr/sbin/rclone
+ sudo cp rclone /usr/bin/
+ sudo chown root:root /usr/bin/rclone
+ sudo chmod 755 /usr/bin/rclone
Install manpage
@@ -131,13 +132,67 @@ Instructions
- rclone
```
+## Installation with snap ##
+
+### Quickstart ###
+
+ * install Snapd on your distro using the instructions below
+ * sudo snap install rclone --classic
+ * Run `rclone config` to setup. See [rclone config docs](http://rclone.org/docs/) for more details.
+
+See below for how to install snapd if it isn't already installed
+
+#### Arch ####
+
+ sudo pacman -S snapd
+
+enable the snapd systemd service:
+
+ sudo systemctl enable --now snapd.socket
+
+#### Debian / Ubuntu ####
+
+ sudo apt install snapd
+
+#### Fedora ####
+
+ sudo dnf copr enable zyga/snapcore
+ sudo dnf install snapd
+
+enable the snapd systemd service:
+
+ sudo systemctl enable --now snapd.service
+
+SELinux support is in beta, so currently:
+
+ sudo setenforce 0
+
+to persist, edit `/etc/selinux/config` to set `SELINUX=permissive` and reboot.
+
+#### Gentoo ####
+
+Install the [gentoo-snappy overlay](https://github.com/zyga/gentoo-snappy).
+
+#### OpenEmbedded/Yocto ####
+
+Install the [snap meta layer](https://github.com/morphis/meta-snappy/blob/master/README.md).
+
+#### openSUSE ####
+
+ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
+ sudo zypper install snapd
+
+#### OpenWrt ####
+
+Enable the snap-openwrt feed.
+
Configure
---------
First you'll need to configure rclone. As the object storage systems
-have quite complicated authentication these are kept in a config file
-`.rclone.conf` in your home directory by default. (You can use the
-`--config` option to choose a different config file.)
+have quite complicated authentication these are kept in a config file.
+(See the `--config` entry for how to find the config file and choose
+its location.)
The easiest way to make the config is to run rclone with the config
option:
@@ -157,6 +212,7 @@ See the following for detailed instructions for
* [Hubic](http://rclone.org/hubic/)
* [Microsoft One Drive](http://rclone.org/onedrive/)
* [Yandex Disk](http://rclone.org/yandex/)
+ * [SFTP](http://rclone.org/sftp/)
* [Crypt](http://rclone.org/crypt/) - to encrypt other remotes
Usage
@@ -393,17 +449,29 @@ Checks the files in the source and destination match.
-Checks the files in the source and destination match. It
-compares sizes and MD5SUMs and prints a report of files which
-don't match. It doesn't alter the source or destination.
+Checks the files in the source and destination match. It compares
+sizes and hashes (MD5 or SHA1) and logs a report of files which don't
+match. It doesn't alter the source or destination.
-`--size-only` may be used to only compare the sizes, not the MD5SUMs.
+If you supply the --size-only flag, it will only compare the sizes not
+the hashes as well. Use this for a quick check.
+
+If you supply the --download flag, it will download the data from
+both remotes and check them against each other on the fly. This can
+be useful for remotes that don't support hashes or if you really want
+to check all the data.
```
rclone check source:path dest:path
```
+### Options
+
+```
+ --download Check by downloading rather than with hash.
+```
+
## rclone ls
List all the objects in the path with size and path.
@@ -649,11 +717,26 @@ Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
+Use the --head flag to print characters only at the start, --tail for
+the end and --offset and --count to print a section in the middle.
+Note that if offset is negative it will count from the end, so
+--offset -1 --count 1 is equivalent to --tail 1.
+
```
rclone cat remote:path
```
+### Options
+
+```
+ --count int Only print N characters. (default -1)
+ --discard Discard the output instead of printing.
+ --head int Only print the first N characters.
+ --offset int Start printing at offset N (or from end if -ve).
+ --tail int Only print the last N characters.
+```
+
## rclone copyto
Copy files from source to dest, skipping already copied
@@ -693,6 +776,42 @@ destination.
rclone copyto source:path dest:path
```
+## rclone cryptcheck
+
+Cryptcheck checks the integrity of a crypted remote.
+
+### Synopsis
+
+
+
+rclone cryptcheck checks a remote against a crypted remote. This is
+the equivalent of running rclone check, but able to check the
+checksums of the crypted remote.
+
+For it to work the underlying remote of the cryptedremote must support
+some kind of checksum.
+
+It works by reading the nonce from each file on the cryptedremote: and
+using that to encrypt each file on the remote:. It then checks the
+checksum of the underlying file on the cryptedremote: against the
+checksum of the file it has just encrypted.
+
+Use it like this
+
+ rclone cryptcheck /path/to/files encryptedremote:path
+
+You can use it like this also, but that will involve downloading all
+the files in remote:path.
+
+ rclone cryptcheck remote:path encryptedremote:path
+
+After it has run it will log the status of the encryptedremote:.
+
+
+```
+rclone cryptcheck remote:path cryptedremote:path
+```
+
## rclone genautocomplete
Output bash completion script for rclone.
@@ -775,7 +894,7 @@ This is **EXPERIMENTAL** - use with care.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
-Start the mount like this
+Start the mount like this (note the & on the end to put rclone in the background).
rclone mount remote:path/to/files /path/to/local/mount &
@@ -783,23 +902,27 @@ Stop the mount with
fusermount -u /path/to/local/mount
+Or if that fails try
+
+ fusermount -z -u /path/to/local/mount
+
Or with OS X
- umount -u /path/to/local/mount
+ umount /path/to/local/mount
### Limitations ###
This can only write files sequentially, it can only seek when reading.
+This means that many applications won't work with their files on an
+rclone mount.
-Rclone mount inherits rclone's directory handling. In rclone's world
-directories don't really exist. This means that empty directories
-will have a tendency to disappear once they fall out of the directory
-cache.
-
-The bucket based FSes (eg swift, s3, google compute storage, b2) won't
-work from the root - you will need to specify a bucket, or a path
-within the bucket. So `swift:` won't work whereas `swift:bucket` will
-as will `swift:bucket/path`.
+The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
+Hubic) won't work from the root - you will need to specify a bucket,
+or a path within the bucket. So `swift:` won't work whereas
+`swift:bucket` will as will `swift:bucket/path`.
+None of these support the concept of directories, so empty
+directories will have a tendency to disappear once they fall out of
+the directory cache.
Only supported on Linux, FreeBSD and OS X at the moment.
@@ -812,6 +935,11 @@ can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so will be less reliable than the rclone command.
+### Filters ###
+
+Note that all the rclone filters can be used to select a subset of the
+files to be visible in the mount.
+
### Bugs ###
* All the remotes should work for read, but some may not for write
@@ -891,6 +1019,19 @@ transfer.
rclone moveto source:path dest:path
```
+## rclone obscure
+
+Obscure password for use in the rclone.conf
+
+### Synopsis
+
+
+Obscure password for use in the rclone.conf
+
+```
+rclone obscure password
+```
+
## rclone rmdirs
Remove any empty directories under the path.
@@ -978,8 +1119,8 @@ If you are using the root directory on its own then don't quote it
Server Side Copy
----------------
-Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
-copy.
+Most remotes (but not all - see [the
+overview](/overview/#optional-features)) support server side copy.
This means if you want to copy one folder to another then rclone won't
download all the files and re-upload them; it will instruct the server
@@ -992,11 +1133,12 @@ Eg
Will copy the contents of `oldbucket` to `newbucket` without
downloading and re-uploading.
-Remotes which don't support server side copy (eg local) **will**
-download and re-upload in this case.
+Remotes which don't support server side copy **will** download and
+re-upload in this case.
Server side copies are used with `sync` and `copy` and will be
-identified in the log when using the `-v` flag.
+identified in the log when using the `-v` flag. They may also be used
+with `move` if the remote doesn't support server side move.
Server side copies will only be attempted if the remote names are the
same.
@@ -1021,15 +1163,60 @@ for bytes, `k` for kBytes, `M` for MBytes and `G` for GBytes may be
used. These are the binary units, eg 1, 2\*\*10, 2\*\*20, 2\*\*30
respectively.
-### --bwlimit=SIZE ###
+### --backup-dir=DIR ###
-Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is `0`
-which means to not limit bandwidth.
+When using `sync`, `copy` or `move` any files which would have been
+overwritten or deleted are moved in their original hierarchy into this
+directory.
+
+If `--suffix` is set, then the moved files will have the suffix added
+to them. If there is a file with the same path (after the suffix has
+been added) in DIR, then it will be overwritten.
+
+The remote in use must support server side move or copy and you must
+use the same remote as the destination of the sync. The backup
+directory must not overlap the destination directory.
+
+For example
+
+ rclone sync /path/to/local remote:current --backup-dir remote:old
+
+will sync `/path/to/local` to `remote:current`, but any files
+which would have been updated or deleted will be stored in
+`remote:old`.
+
+If running rclone from a script you might want to use today's date as
+the directory name passed to `--backup-dir` to store the old files, or
+you might want to pass `--suffix` with today's date.
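A minimal sketch of the date-based approach (the paths and remote name are placeholders; `echo` is used so the sketch only prints the command it would run):

```
# Build a dated backup directory name to pass to --backup-dir.
DAY=$(date +%Y-%m-%d)
echo rclone sync /path/to/local "remote:current" --backup-dir "remote:old-$DAY"
```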
+
+### --bwlimit=BANDWIDTH_SPEC ###
+
+This option controls the bandwidth limit. Limits can be specified
+in two ways: as a single limit, or as a timetable.
+
+Single limits last for the duration of the session. To use a single limit,
+specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The
+default is `0` which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
-This only limits the bandwidth of the data transfer, it doesn't limit
-the bandwith of the directory listings etc.
+It is also possible to specify a "timetable" of limits, which will cause
+certain limits to be applied at certain times. To specify a timetable, format your
+entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
+
+An example of a typical timetable to avoid link saturation during daytime
+working hours could be:
+
+`--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"`
+
+In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am.
+At noon, it will rise to 10MBytes/s, and drop back to 512kBytes/sec at 1pm.
+At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be
+completely disabled (full speed). Anything between 11pm and 8am will remain
+unlimited.
+
+Bandwidth limits only apply to the data transfer. They don't apply to the
+bandwidth of the directory listings etc.
Note that the units are Bytes/s not Bits/s. Typically connections are
measured in Bits/s - to convert divide by 8. For example let's say
@@ -1037,6 +1224,13 @@ you have a 10 Mbit/s connection and you wish rclone to use half of it
- 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a `--bwlimit
0.625M` parameter for rclone.
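The arithmetic above can be scripted, eg to derive a `--bwlimit` value from a target speed in Mbit/s (`MBITS=5` here is the hypothetical target, half of the 10 Mbit/s link):

```
# Convert Mbit/s to MByte/s for --bwlimit: divide by 8.
MBITS=5
MBYTES=$(awk "BEGIN { printf \"%g\", $MBITS / 8 }")
echo "--bwlimit ${MBYTES}M"   # prints: --bwlimit 0.625M
```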
+### --buffer-size=SIZE ###
+
+Use this sized buffer to speed up file transfers. Each `--transfer`
+will use this much memory for buffering.
+
+Set to 0 to disable the buffering for the minimum memory use.
+
### --checkers=N ###
The number of checkers to run in parallel. Checkers do the equality
@@ -1068,11 +1262,18 @@ they are incorrect as it would normally.
### --config=CONFIG_FILE ###
-Specify the location of the rclone config file. Normally this is in
-your home directory as a file called `.rclone.conf`. If you run
-`rclone -h` and look at the help for the `--config` option you will
-see where the default location is for you. Use this flag to override
-the config location, eg `rclone --config=".myconfig" .config`.
+Specify the location of the rclone config file.
+
+Normally the config file is in your home directory as a file called
+`.config/rclone/rclone.conf` (or `.rclone.conf` if created with an
+older version). If `$XDG_CONFIG_HOME` is set it will be at
+`$XDG_CONFIG_HOME/rclone/rclone.conf`
+
+If you run `rclone -h` and look at the help for the `--config` option
+you will see where the default location is for you.
+
+Use this flag to override the config location, eg `rclone
+--config=".myconfig" .config`.
### --contimeout=TIME ###
@@ -1093,6 +1294,15 @@ Do a trial run with no permanent changes. Use this to see what rclone
would do without actually doing it. Useful when setting up the `sync`
command which deletes files in the destination.
+### --ignore-checksum ###
+
+Normally rclone will check that the checksums of transferred files
+match, and give an error "corrupted on transfer" if they don't.
+
+You can use this option to skip that check. You should only use it if
+you have had the "corrupted on transfer" error message and you are
+sure you might want to transfer potentially corrupted data.
+
### --ignore-existing ###
Using this option will make rclone unconditionally skip all files
@@ -1132,6 +1342,22 @@ This can be useful for tracking down problems with syncs in
combination with the `-v` flag. See the Logging section for more
info.
+### --log-level LEVEL ###
+
+This sets the log level for rclone. The default log level is `NOTICE`.
+
+`DEBUG` is equivalent to `-vv`. It outputs lots of debug info - useful
+for bug reports and really finding out what rclone is doing.
+
+`INFO` is equivalent to `-v`. It outputs information about each transfer
+and prints stats once a minute by default.
+
+`NOTICE` is the default log level if no logging flags are supplied. It
+outputs very little when things are working normally. It outputs
+warnings and significant events.
+
+`ERROR` is equivalent to `-q`. It only outputs error messages.
+
### --low-level-retries NUMBER ###
This controls the number of low level retries rclone does.
@@ -1248,6 +1474,51 @@ equals 1,048,576 bits/s and not 1,000,000 bits/s.
The default is `bytes`.
+### --suffix=SUFFIX ###
+
+This is for use with `--backup-dir` only. If this isn't set then
+`--backup-dir` will move files with their original name. If it is set
+then the files will have SUFFIX added on to them.
+
+See `--backup-dir` for more info.
+
+### --syslog ###
+
+On capable OSes (not Windows or Plan9) send all log output to syslog.
+
+This can be useful for running rclone in a script or with `rclone mount`.
+
+### --syslog-facility string ###
+
+If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`).
+See `man syslog` for a list of possible facilities. The default
+facility is `DAEMON`.
+
+### --track-renames ###
+
+By default rclone doesn't keep track of renamed files, so if you
+rename a file locally then sync it to a remote, rclone will delete the
+old file on the remote and upload a new copy.
+
+If you use this flag, and the remote supports server side copy or
+server side move, and the source and destination have a compatible
+hash, then this will track renames during `sync`, `copy`, and `move`
+operations and perform renaming server-side.
+
+Files will be matched by size and hash - if both match then a rename
+will be considered.
+
+If the destination does not support server-side copy or move, rclone
+will fall back to the default behaviour and log an error level message
+to the console.
+
+Note that `--track-renames` is incompatible with `--no-traverse` and
+that it uses extra memory to keep track of all the rename candidates.
+
+Note also that `--track-renames` is incompatible with
+`--delete-before` and will select `--delete-after` instead of
+`--delete-during`.
+
### --delete-(before,during,after) ###
This option allows you to specify when files on your destination are
@@ -1255,16 +1526,21 @@ deleted when you sync folders.
Specifying the value `--delete-before` will delete all files present
on the destination, but not on the source *before* starting the
-transfer of any new or updated files. This uses extra memory as it
-has to store the source listing before proceeding.
+transfer of any new or updated files. This uses two passes through the
+file systems, one for the deletions and one for the copies.
-Specifying `--delete-during` (default value) will delete files while
-checking and uploading files. This is usually the fastest option.
-Currently this works the same as `--delete-after` but it may change in
-the future.
+Specifying `--delete-during` will delete files while checking and
+uploading files. This is the fastest option and uses the least memory.
-Specifying `--delete-after` will delay deletion of files until all new/updated
-files have been successfully transfered.
+Specifying `--delete-after` (the default value) will delay deletion of
+files until all new/updated files have been successfully transferred.
+The files to be deleted are collected in the copy pass then deleted
+after the copy pass has completed successfully. The files to be
+deleted are held in memory so this mode may use more memory. This is
+the safest mode as it will only delete files if there have been no
+errors subsequent to that. If there have been errors before the
+deletions start then you will get the message `not deleting files as
+there were IO errors`.
### --timeout=TIME ###
@@ -1300,12 +1576,14 @@ This can be useful when transferring to a remote which doesn't support
mod times directly as it is more accurate than a `--size-only` check
and faster than using `--checksum`.
-### -v, --verbose ###
+### -v, -vv, --verbose ###
-If you set this flag, rclone will become very verbose telling you
-about every file it considers and transfers.
+With `-v` rclone will tell you about each file that is transferred and
+a small number of significant events.
-Very useful for debugging.
+With `-vv` rclone will become very verbose telling you about every
+file it considers and transfers. Please send bug reports with a log
+with this setting.
### -V, --version ###
@@ -1454,6 +1732,8 @@ This option defaults to `false`.
The `--no-traverse` flag controls whether the destination file system
is traversed when using the `copy` or `move` commands.
+`--no-traverse` is not compatible with `sync` and will be ignored if
+you supply it with `sync`.
If you are only copying a small number of files and/or have a large
number of files on the destination then `--no-traverse` will stop
@@ -1492,40 +1772,114 @@ See the [filtering section](http://rclone.org/filtering/).
Logging
-------
-rclone has 3 levels of logging, `Error`, `Info` and `Debug`.
+rclone has 4 levels of logging, `Error`, `Notice`, `Info` and `Debug`.
-By default rclone logs `Error` and `Info` to standard error and `Debug`
-to standard output. This means you can redirect standard output and
-standard error to different places.
+By default rclone logs to standard error. This means you can redirect
+standard error and still see the normal output of rclone commands (eg
+`rclone ls`).
-By default rclone will produce `Error` and `Info` level messages.
+By default rclone will produce `Error` and `Notice` level messages.
If you use the `-q` flag, rclone will only produce `Error` messages.
-If you use the `-v` flag, rclone will produce `Error`, `Info` and
-`Debug` messages.
+If you use the `-v` flag, rclone will produce `Error`, `Notice` and
+`Info` messages.
+
+If you use the `-vv` flag, rclone will produce `Error`, `Notice`,
+`Info` and `Debug` messages.
+
+You can also control the log levels with the `--log-level` flag.
If you use the `--log-file=FILE` option, rclone will redirect `Error`,
`Info` and `Debug` messages along with standard error to FILE.
+If you use the `--syslog` flag then rclone will log to syslog and the
+`--syslog-facility` controls which facility it uses.
+
+Rclone prefixes all log messages with their level in capitals, eg INFO
+which makes it easy to grep the log file for different kinds of
+information.
+
Exit Code
---------
-If any errors occurred during the command, rclone with an exit code of
-`1`. This allows scripts to detect when rclone operations have failed.
+If any errors occurred during the command, rclone will exit with a
+non-zero exit code. This allows scripts to detect when rclone
+operations have failed.
During the startup phase rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non-zero exit code if (after retries) there were no
-transfers with errors remaining. For every error counted there will
-be a high priority log message (visibile with `-q`) showing the
-message and which file caused the problem. A high priority message is
-also shown when starting a retry so the user can see that any previous
-error messages may not be valid after the retry. If rclone has done a
-retry it will log a high priority message if the retry was successful.
+only exit with a non-zero exit code if (after retries) there were
+still failed transfers. For every error counted there will be a high
+priority log message (visible with `-q`) showing the message and
+which file caused the problem. A high priority message is also shown
+when starting a retry so the user can see that any previous error
+messages may not be valid after the retry. If rclone has done a retry
+it will log a high priority message if the retry was successful.
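As a sketch, a wrapper like this (the function name is ours, not part of rclone) makes the non-zero exit code visible in scripts:

```
# Run a command and report its exit code if it fails.
run_and_check() {
  "$@"
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "command failed with exit code $status" >&2
  fi
  return "$status"
}
# Usage: run_and_check rclone sync /home/source remote:destination
```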
+
+Environment Variables
+---------------------
+
+Rclone can be configured entirely using environment variables. These
+can be used to set defaults for options or config file entries.
+
+### Options ###
+
+Every option in rclone can have its default set by environment
+variable.
+
+To find the name of the environment variable, first take the long
+option name, strip the leading `--`, change `-` to `_`, make
+upper case and prepend `RCLONE_`.
+
+For example to always set `--stats 5s`, set the environment variable
+`RCLONE_STATS=5s`. If you set stats on the command line this will
+override the environment variable setting.
+
+Or to always use the trash in drive `--drive-use-trash`, set
+`RCLONE_DRIVE_USE_TRASH=true`.
+
+The same parser is used for the options and the environment variables
+so they take exactly the same form.
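The naming rule can be sketched as a small shell helper (the function is illustrative, not part of rclone):

```
# Turn a long option name into its environment variable name:
# strip the leading "--", change "-" to "_", upper-case, prepend RCLONE_.
opt_to_env() {
  printf 'RCLONE_%s\n' "$(printf '%s' "$1" | sed -e 's/^--//' -e 's/-/_/g' | tr '[:lower:]' '[:upper:]')"
}
opt_to_env --drive-use-trash   # prints RCLONE_DRIVE_USE_TRASH
opt_to_env --stats             # prints RCLONE_STATS
```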
+
+### Config file ###
+
+You can set defaults for values in the config file on an individual
+remote basis. If you want to use this feature, you will need to
+discover the name of the config items that you want. The easiest way
+is to run through `rclone config` by hand, then look in the config
+file to see what the values are (the config file can be found by
+looking at the help for `--config` in `rclone help`).
+
+To find the name of the environment variable you need to set, take
+`RCLONE_CONFIG_` + name of remote + `_` + name of config file option
+and make it all uppercase.
+
+For example to configure an S3 remote named `mys3:` without a config
+file (using unix ways of setting environment variables):
+
+```
+$ export RCLONE_CONFIG_MYS3_TYPE=s3
+$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
+$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
+$ rclone lsd MYS3:
+ -1 2016-09-21 12:54:21 -1 my-bucket
+$ rclone listremotes | grep mys3
+mys3:
+```
+
+Note that if you want to create a remote using environment variables
+you must create the `..._TYPE` variable as above.
+
+### Other environment variables ###
+
+ * `RCLONE_CONFIG_PASS` set to contain your config file password (see [Configuration Encryption](#configuration-encryption) section)
+ * HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
+ * HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
+ * The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
# Configuring rclone on a remote / headless machine #
@@ -2046,6 +2400,7 @@ Here is an overview of the major features of each cloud storage system.
| Hubic | MD5 | Yes | No | No | R/W |
| Backblaze B2 | SHA1 | Yes | No | No | R/W |
| Yandex Disk | MD5 | Yes | No | No | R/W |
+| SFTP | - | Yes | Depends | No | - |
| The local filesystem | All | Yes | Depends | No | - |
### Hash ###
@@ -2079,7 +2434,8 @@ This can cause problems when syncing between a case insensitive
system and a case sensitive system. The symptom of this is that no
matter how many times you run the sync it never completes fully.
-The local filesystem may or may not be case sensitive depending on OS.
+The local filesystem and SFTP may or may not be case sensitive
+depending on OS.
* Windows - usually case insensitive, though case is preserved
* OSX - usually case insensitive, though it is possible to format case sensitive
@@ -2128,10 +2484,11 @@ operations more efficient.
| Dropbox | Yes | Yes | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
| Google Cloud Storage | Yes | Yes | No | No | No |
| Amazon Drive | Yes | No | Yes | Yes | No [#575](https://github.com/ncw/rclone/issues/575) |
-| Microsoft One Drive | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) |
+| Microsoft One Drive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) |
| Hubic | Yes † | Yes | No | No | No |
| Backblaze B2 | No | No | No | No | Yes |
| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) |
+| SFTP | No | No | Yes | Yes | No |
| The local filesystem | Yes | No | Yes | Yes | No |
@@ -2204,31 +2561,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 6
+Storage> 7
Google Application Client Id - leave blank normally.
-client_id>
+client_id>
Google Application Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -2242,8 +2603,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
@@ -2359,7 +2720,7 @@ Here are the possible extensions with their corresponding mime types.
| epub | application/epub+zip | E-book format |
| html | text/html | An HTML Document |
| jpg | image/jpeg | A JPEG Image File |
-| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
+| odp | application/vnd.oasis.opendocument.presentation | Openoffice Presentation |
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
| odt | application/vnd.oasis.opendocument.text | Openoffice Document |
@@ -2374,6 +2735,10 @@ Here are the possible extensions with their corresponding mime types.
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images CSS |
+#### --drive-skip-gdocs ####
+
+Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+
### Limitations ###
Drive has quite a lot of rate limiting. This causes rclone to be
@@ -2441,25 +2806,29 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
@@ -2500,21 +2869,27 @@ Choose a number from below, or type in your own value
/ Asia Pacific (Tokyo) Region
8 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
+ / Asia Pacific (Seoul)
+ 9 | Needs location constraint ap-northeast-2.
+ \ "ap-northeast-2"
+ / Asia Pacific (Mumbai)
+10 | Needs location constraint ap-south-1.
+ \ "ap-south-1"
/ South America (Sao Paulo) Region
- 9 | Needs location constraint sa-east-1.
+11 | Needs location constraint sa-east-1.
\ "sa-east-1"
/ If using an S3 clone that only understands v2 signatures
-10 | eg Ceph/Dreamhost
+12 | eg Ceph/Dreamhost
| set this and make sure you set the endpoint.
\ "other-v2-signature"
/ If using an S3 clone that understands v4 signatures set this
-11 | and make sure you set the endpoint.
+13 | and make sure you set the endpoint.
\ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
-endpoint>
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
@@ -2533,7 +2908,11 @@ Choose a number from below, or type in your own value
\ "ap-southeast-2"
8 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / South America (Sao Paulo) Region.
+ 9 / Asia Pacific (Seoul)
+ \ "ap-northeast-2"
+10 / Asia Pacific (Mumbai)
+ \ "ap-south-1"
+11 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -2580,8 +2959,11 @@ env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
+acl = private
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2779,7 +3161,7 @@ access_key_id> WLGDGYAQYIGI833EV05A
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
-location_constraint>
+location_constraint>
server_side_encryption>
```
@@ -2792,8 +3174,8 @@ access_key_id = WLGDGYAQYIGI833EV05A
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
-location_constraint =
-server_side_encryption =
+location_constraint =
+server_side_encryption =
```
Minio doesn't support all the features of S3 yet. In particular it
@@ -2833,27 +3215,31 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 10
+Storage> 11
User name to log in.
user> user_name
API key or password.
@@ -2875,25 +3261,28 @@ Choose a number from below, or type in your own value
auth> 1
User domain - optional (v3 auth)
domain> Default
-Tenant name - optional
-tenant>
+Tenant name - optional for v1 auth, required otherwise
+tenant> tenant_name
Tenant domain - optional (v3 auth)
tenant_domain>
Region name - optional
-region>
+region>
Storage URL - optional
-storage_url>
-Remote config
+storage_url>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
-auth_version>
+auth_version>
+Remote config
--------------------
[remote]
user = user_name
key = password_or_api_key
auth = https://auth.api.rackspacecloud.com/v1.0
-tenant =
-region =
-storage_url =
+domain = Default
+tenant =
+tenant_domain =
+region =
+storage_url =
+auth_version =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2961,7 +3350,7 @@ system.
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.
-
+
### Modified time ###
The modified time is stored as metadata on the object as
@@ -3024,39 +3413,43 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 4
Dropbox App Key - leave blank normally.
-app_key>
+app_key>
Dropbox App Secret - leave blank normally.
-app_secret>
+app_secret>
Remote config
Please visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
--------------------
[remote]
-app_key =
-app_secret =
+app_key =
+app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK
@@ -3146,65 +3539,68 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 5
+Storage> 6
Google Application Client Id - leave blank normally.
-client_id>
+client_id>
Google Application Client Secret - leave blank normally.
-client_secret>
+client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
-service_account_file>
+service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
- * Object owner gets OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Object owner gets OWNER access, and project team owners get OWNER access.
- 2) bucketOwnerFullControl
- * Object owner gets OWNER access, and project team owners get READER access.
- 3) bucketOwnerRead
- * Object owner gets OWNER access [default if left blank].
- 4) private
- * Object owner gets OWNER access, and project team members get access according to their roles.
- 5) projectPrivate
- * Object owner gets OWNER access, and all Users get READER access.
- 6) publicRead
+ 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Object owner gets OWNER access, and project team owners get OWNER access.
+ \ "bucketOwnerFullControl"
+ 3 / Object owner gets OWNER access, and project team owners get READER access.
+ \ "bucketOwnerRead"
+ 4 / Object owner gets OWNER access [default if left blank].
+ \ "private"
+ 5 / Object owner gets OWNER access, and project team members get access according to their roles.
+ \ "projectPrivate"
+ 6 / Object owner gets OWNER access, and all Users get READER access.
+ \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
- * Project team owners get OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Project team owners get OWNER access [default if left blank].
- 2) private
- * Project team members get access according to their roles.
- 3) projectPrivate
- * Project team owners get OWNER access, and all Users get READER access.
- 4) publicRead
- * Project team owners get OWNER access, and all Users get WRITER access.
- 5) publicReadWrite
+ 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Project team owners get OWNER access [default if left blank].
+ \ "private"
+ 3 / Project team members get access according to their roles.
+ \ "projectPrivate"
+ 4 / Project team owners get OWNER access, and all Users get READER access.
+ \ "publicRead"
+ 5 / Project team owners get OWNER access, and all Users get WRITER access.
+ \ "publicReadWrite"
bucket_acl> 2
Remote config
-Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine or Y didn't work
@@ -3218,8 +3614,8 @@ Got code
--------------------
[remote]
type = google cloud storage
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
@@ -3314,40 +3710,50 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
-client_id>
+client_id>
Amazon Application Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
@@ -3392,7 +3798,8 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
-the Amazon Drive website.
+the Amazon Drive website. As of November 17, 2016, files are
+automatically deleted by Amazon from the trash after 30 days.
### Using with non `.com` Amazon accounts ###
@@ -3455,20 +3862,20 @@ larger than this will fail.
At the time of writing (Jan 2016) this is in the area of 50GB per file.
This means that larger files are likely to fail.
-Unfortunatly there is no way for rclone to see that this failure is
+Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
failure. To avoid this problem, use `--max-size 50000M` option to limit
the maximum size of uploaded files. Note that `--max-size` does not split
files into segments, it only ignores files over this size.
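+
+For example, assuming a remote named `remote:`, a sync that skips any
+file over the 50GB limit could look like this:
+
+    rclone --max-size 50000M sync /home/source remote:backup
+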
-Microsoft One Drive
+Microsoft OneDrive
-----------------------------------------
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
-The initial setup for One Drive involves getting a token from
+The initial setup for OneDrive involves getting a token from
Microsoft which you need to do in your browser. `rclone config` walks
you through it.
@@ -3488,31 +3895,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 9
+Storage> 10
Microsoft App Client Id - leave blank normally.
-client_id>
+client_id>
Microsoft App Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -3526,8 +3937,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -3547,21 +3958,21 @@ you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
-List directories in top level of your One Drive
+List directories in top level of your OneDrive
rclone lsd remote:
-List all the files in your One Drive
+List all the files in your OneDrive
rclone ls remote:
-To copy a local directory to an One Drive directory called backup
+To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
### Modified time and hashes ###
-One Drive allows modification times to be set on objects accurate to 1
+OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
@@ -3573,7 +3984,7 @@ One drive supports SHA1 type hashes, so you can use `--checksum` flag.
Any files you delete with rclone will end up in the trash. Microsoft
doesn't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Microsoft's apps or via
-the One Drive website.
+the OneDrive website.
### Specific options ###
@@ -3592,14 +4003,14 @@ is 10MB.
### Limitations ###
-Note that One Drive is case insensitive so you can't have a
+Note that OneDrive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
-Rclone only supports your default One Drive, and doesn't work with One
+Rclone only supports your default OneDrive, and doesn't work with One
Drive for business. Both these issues may be fixed at some point
depending on user demand!
-There are quite a few characters that can't be in One Drive file
+There are quite a few characters that can't be in OneDrive file
names. These can't occur on Windows platforms, but on non-Windows
platforms they are common. Rclone will map these names to and from an
identical looking unicode equivalent. For example if a file has a `?`
@@ -3633,31 +4044,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 7
+Storage> 8
Hubic Client Id - leave blank normally.
-client_id>
+client_id>
Hubic Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -3671,8 +4086,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -3757,25 +4172,29 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 3
Account ID
@@ -3783,13 +4202,13 @@ account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
-endpoint>
+endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
-endpoint =
+endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4044,31 +4463,35 @@ Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+10 / Microsoft OneDrive
\ "onedrive"
-10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-11 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
-Storage> 11
+Storage> 13
Yandex Client Id - leave blank normally.
-client_id>
+client_id>
Yandex Client Secret - leave blank normally.
-client_secret>
+client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -4082,8 +4505,8 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id =
+client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
@@ -4129,6 +4552,129 @@ metadata called `rclone_modified` in RFC3339 with nanoseconds format.
MD5 checksums are natively supported by Yandex Disk.
+SFTP
+----------------------------------------
+
+SFTP is the [Secure (or SSH) File Transfer
+Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
+
+It runs over SSH v2 and is standard with most modern SSH
+installations.
+
+Paths are specified as `remote:path`. If the path does not begin with
+a `/` it is relative to the home directory of the user. An empty path
+`remote:` refers to the user's home directory.
+
+Here is an example of making a SFTP configuration. First run
+
+ rclone config
+
+This will guide you through an interactive setup process. You will
+need the hostname of the SFTP server, and optionally a user name and
+port if the defaults don't suit.
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 7 / Google Drive
+ \ "drive"
+ 8 / Hubic
+ \ "hubic"
+ 9 / Local Disk
+ \ "local"
+10 / Microsoft OneDrive
+ \ "onedrive"
+11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
+ \ "yandex"
+Storage> 12
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "example.com"
+host> example.com
+SSH username, leave blank for current username, ncw
+user>
+SSH port
+port>
+SSH password, leave blank to use ssh-agent
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Remote config
+--------------------
+[remote]
+host = example.com
+user =
+port =
+pass =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This remote is called `remote` and can now be used like this
+
+See all directories in the home directory
+
+ rclone lsd remote:
+
+Make a new directory
+
+ rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+ rclone ls remote:path/to/directory
+
+Sync `/home/local/directory` to the remote directory, deleting any
+excess files in the directory.
+
+ rclone sync /home/local/directory remote:directory
+
+### Modified time ###
+
+Modified times are stored on the server to 1 second precision.
+
+Modified times are used in syncing and are fully supported.
+
+### Limitations ###
+
+SFTP does not support any checksums.
+
+SFTP isn't supported under plan9 until [this
+issue](https://github.com/pkg/sftp/issues/156) is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
+
+Note that `--timeout` isn't supported (but `--contimeout` is).
+
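+Since SFTP provides no hashes, one way to verify a copy (sketched
+here with an illustrative remote name) is `rclone check` with the
+`--download` flag, which compares the actual file data:
+
+    rclone check --download /home/local/directory remote:directory
+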
Crypt
----------------------------------------
@@ -4181,12 +4727,14 @@ Choose a number from below, or type in your own value
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-12 / Yandex Disk
+12 / SSH/SFTP Connection
+ \ "sftp"
+13 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
-"myremote:bucket" or "myremote:"
+"myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
@@ -4243,6 +4791,7 @@ elsewhere it will be compatible - all the secrets used are derived
from those two passwords/passphrases.
Note that rclone does not encrypt
+
* file length - this can be calculated within 16 bytes
* modification time - used for syncing
@@ -4333,11 +4882,13 @@ $ rclone -q ls remote:path
Here are some of the features of the file name encryption modes
Off
+
* doesn't hide file names or directory structure
* allows for longer file names (~246 characters)
* can use sub paths and copy single files
Standard
+
* file names encrypted
* file names can't be as long (~156 characters)
* can use sub paths and copy single files
@@ -4361,6 +4912,51 @@ depends on that.
Hashes are not stored for crypt. However the data integrity is
protected by an extremely strong crypto authenticator.
+Note that you should use the `rclone cryptcheck` command to check the
+integrity of a crypted remote instead of `rclone check` which can't
+check the checksums properly.
+
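+For example, to verify a crypted remote against the plaintext files
+it was made from (paths here are illustrative):
+
+    rclone cryptcheck /path/to/files encryptedremote:path
+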
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --crypt-show-mapping ####
+
+If this flag is set then for each file that the remote is asked to
+list, it will log (at level INFO) a line stating the decrypted file
+name and the encrypted file name.
+
+This is so you can work out which encrypted names are which decrypted
+names just in case you need to do something with the encrypted file
+names, or for debugging purposes.
+
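+A sketch of its use, assuming a crypt remote named `eremote:` (the
+mapping is logged at INFO level, so add `-v` to see it):
+
+    rclone -v --crypt-show-mapping ls eremote:
+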
+## Backing up a crypted remote ##
+
+If you wish to backup a crypted remote, it is recommended that you use
+`rclone sync` on the encrypted files, and make sure the passwords are
+the same in the new encrypted remote.
+
+This will have the following advantages
+
+ * `rclone sync` will check the checksums while copying
+ * you can use `rclone check` between the encrypted remotes
+ * you don't decrypt and encrypt unnecessarily
+
+For example, let's say you have your original remote at `remote:` and
+the encrypted version at `eremote:` with path `remote:crypt`. You
+would then set up the new remote `remote2:` and then the encrypted
+version `eremote2:` with path `remote2:crypt` using the same passwords
+as `eremote:`.
+
+To sync the two remotes you would do
+
+ rclone sync remote:crypt remote2:crypt
+
+And to check the integrity you would do
+
+ rclone check remote:crypt remote2:crypt
+
## File formats ##
### File encryption ###
@@ -4536,6 +5132,47 @@ file exceeds 258 characters on z, so only use this option if you have to.
Here are the command line options specific to local storage
+#### --copy-links, -L ####
+
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+
+If you supply this flag then rclone will follow the symlink and copy
+the pointed to file or directory.
+
+This flag applies to all commands.
+
+For example, supposing you have a directory structure like this
+
+```
+$ tree /tmp/a
+/tmp/a
+├── b -> ../b
+├── expected -> ../expected
+├── one
+└── two
+ └── three
+```
+
+Then you can see the difference with and without the flag like this
+
+```
+$ rclone ls /tmp/a
+ 6 one
+ 6 two/three
+```
+
+and
+
+```
+$ rclone -L ls /tmp/a
+ 4174 expected
+ 6 one
+ 6 two/three
+ 6 b/two
+ 6 b/one
+```
+
#### --one-file-system, -x ####
This tells rclone to stay in the filesystem specified by the root and
@@ -4580,6 +5217,89 @@ flag.
Changelog
---------
+ * v1.36 - 2017-03-18
+ * New Features
+ * SFTP remote (Jack Schmidt)
+ * Re-implement sync routine to work a directory at a time reducing memory usage
+ * Logging revamped to be more inline with rsync - now much quieter
+ * -v only shows transfers
+ * -vv is for full debug
+ * --syslog to log to syslog on capable platforms
+ * Implement --backup-dir and --suffix
+ * Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
+ * Add time-based bandwidth limits (Lukas Loesche)
+ * rclone cryptcheck: checks integrity of crypt remotes
+ * Allow all config file variables and options to be set from environment variables
+ * Add --buffer-size parameter to control buffer size for copy
+ * Make --delete-after the default
+ * Add --ignore-checksum flag (fixed by Hisham Zarka)
+ * rclone check: Add --download flag to check all the data, not just hashes
+ * rclone cat: add --head, --tail, --offset, --count and --discard
+ * rclone config: when choosing from a list, allow the value to be entered too
+ * rclone config: allow rename and copy of remotes
+ * rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
+ * Comply with XDG Base Directory specification (Dario Giovannetti)
+ * this moves the default location of the config file in a backwards compatible way
+ * Release changes
+ * Ubuntu snap support (Dedsec1)
+ * Compile with go 1.8
+ * MIPS/Linux big and little endian support
+ * Bug Fixes
+ * Fix copyto copying things to the wrong place if the destination dir didn't exist
+ * Fix parsing of remotes in moveto and copyto
+ * Fix --delete-before deleting files on copy
+ * Fix --files-from with an empty file copying everything
+ * Fix sync: don't update mod times if --dry-run set
+ * Fix MimeType propagation
+ * Fix filters to add ** rules to directory rules
+ * Local
+ * Implement -L, --copy-links flag to allow rclone to follow symlinks
+ * Open files in write only mode so rclone can write to an rclone mount
+ * Fix unnormalised unicode causing problems reading directories
+ * Fix interaction between -x flag and --max-depth
+ * Mount
+ * Implement proper directory handling (mkdir, rmdir, renaming)
+ * Make include and exclude filters apply to mount
+ * Implement read and write async buffers - control with --buffer-size
+ * Fix fsync on for directories
+ * Fix retry on network failure when reading off crypt
+ * Crypt
+ * Add --crypt-show-mapping to show encrypted file mapping
+ * Fix crypt writer getting stuck in a loop
+ * **IMPORTANT** this bug had the potential to cause data corruption when
+ * reading data from a network based remote and
+ * writing to a crypt on Google Drive
+ * Use the cryptcheck command to validate your data if you are concerned
+ * If syncing two crypt remotes, sync the unencrypted remote
+ * Amazon Drive
+ * Fix panics on Move (rename)
+ * Fix panic on token expiry
+ * B2
+ * Fix inconsistent listings and rclone check
+ * Fix uploading empty files with go1.8
+ * Constrain memory usage when doing multipart uploads
+ * Fix upload url not being refreshed properly
+ * Drive
+ * Fix Rmdir on directories with trashed files
+ * Fix "Ignoring unknown object" when downloading
+ * Add --drive-list-chunk
+ * Add --drive-skip-gdocs (Károly Oláh)
+ * OneDrive
+ * Implement Move
+ * Fix Copy
+ * Fix overwrite detection in Copy
+ * Fix waitForJob to parse errors correctly
+ * Use token renewer to stop auth errors on long uploads
+ * Fix uploading empty files with go1.8
+ * Google Cloud Storage
+ * Fix depth 1 directory listings
+ * Yandex
+ * Fix single level directory listing
+ * Dropbox
+ * Normalise the case for single level directory listings
+ * Fix depth 1 listing
+ * S3
+ * Added ca-central-1 region (Jon Yergatian)
* v1.35 - 2017-01-02
* New Features
* moveto and copyto commands for choosing a destination name on copy/move
@@ -5341,6 +6061,17 @@ Contributors
* 0xJAKE <0xJAKE@users.noreply.github.com>
* Thibault Molleman
* Scott McGillivray
+ * Bjørn Erik Pedersen
+ * Lukas Loesche
+ * emyarod
+ * T.C. Ferguson
+ * Brandur
+ * Dario Giovannetti
+ * Károly Oláh
+ * Jon Yergatian
+ * Jack Schmidt
+ * Dedsec1
+ * Hisham Zarka
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index dc82efbf8..b9edce96d 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Jan 02, 2017
+Mar 18, 2017
@@ -22,6 +22,7 @@ from
- Hubic
- Backblaze B2
- Yandex Disk
+- SFTP
- The local filesystem
Features
@@ -74,9 +75,9 @@ Fetch and unpack
Copy binary file
- sudo cp rclone /usr/sbin/
- sudo chown root:root /usr/sbin/rclone
- sudo chmod 755 /usr/sbin/rclone
+ sudo cp rclone /usr/bin/
+ sudo chown root:root /usr/bin/rclone
+ sudo chmod 755 /usr/bin/rclone
Install manpage
@@ -140,12 +141,68 @@ Instructions
- rclone
+Installation with snap
+
+Quickstart
+
+- install Snapd on your distro using the instructions below
+- sudo snap install rclone --classic
+- Run rclone config to setup. See rclone config docs for more details.
+
+See below for how to install snapd if it isn't already installed
+
+Arch
+
+ sudo pacman -S snapd
+
+enable the snapd systemd service:
+
+ sudo systemctl enable --now snapd.socket
+
+Debian / Ubuntu
+
+ sudo apt install snapd
+
+Fedora
+
+ sudo dnf copr enable zyga/snapcore
+ sudo dnf install snapd
+
+enable the snapd systemd service:
+
+ sudo systemctl enable --now snapd.service
+
+SELinux support is in beta, so currently:
+
+ sudo setenforce 0
+
+to persist, edit /etc/selinux/config to set SELINUX=permissive and
+reboot.
+
+Gentoo
+
+Install the gentoo-snappy overlay.
+
+OpenEmbedded/Yocto
+
+Install the snap meta layer.
+
+openSUSE
+
+ sudo zypper addrepo http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
+ sudo zypper install snapd
+
+OpenWrt
+
+Enable the snap-openwrt feed.
+
+
Configure
First you'll need to configure rclone. As the object storage systems
-have quite complicated authentication these are kept in a config file
-.rclone.conf in your home directory by default. (You can use the
---config option to choose a different config file.)
+have quite complicated authentication these are kept in a config file.
+(See the --config entry for how to find the config file and choose its
+location.)
The easiest way to make the config is to run rclone with the config
option:
@@ -165,6 +222,7 @@ See the following for detailed instructions for
- Hubic
- Microsoft One Drive
- Yandex Disk
+- SFTP
- Crypt - to encrypt other remotes
@@ -372,13 +430,23 @@ Checks the files in the source and destination match.
Synopsis
Checks the files in the source and destination match. It compares sizes
-and MD5SUMs and prints a report of files which don't match. It doesn't
-alter the source or destination.
+and hashes (MD5 or SHA1) and logs a report of files which don't match.
+It doesn't alter the source or destination.
---size-only may be used to only compare the sizes, not the MD5SUMs.
+If you supply the --size-only flag, it will only compare the sizes not
+the hashes as well. Use this for a quick check.
+
+If you supply the --download flag, it will download the data from both
+remotes and check them against each other on the fly. This can be useful
+for remotes that don't support hashes or if you really want to check all
+the data.
rclone check source:path dest:path
+Options
+
+ --download Check by downloading rather than with hash.
+
rclone ls
@@ -598,8 +666,21 @@ Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
+Use the --head flag to print characters only at the start, --tail for
+the end and --offset and --count to print a section in the middle. Note
+that if offset is negative it will count from the end, so --offset -1
+--count 1 is equivalent to --tail 1.
+
rclone cat remote:path
+Options
+
+ --count int Only print N characters. (default -1)
+ --discard Discard the output instead of printing.
+ --head int Only print the first N characters.
+ --offset int Start printing at offset N (or from end if -ve).
+ --tail int Only print the last N characters.
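+For example (hypothetical remote and file names), to print a slice from
+the middle of a file, or just its final character:

```shell
# Hypothetical remote/file names; print bytes 100-199 of the file
rclone cat --offset 100 --count 100 remote:path/to/file.txt

# Print only the last character (same as --tail 1)
rclone cat --offset -1 --count 1 remote:path/to/file.txt
```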
+
rclone copyto
@@ -635,6 +716,38 @@ time or MD5SUM. It doesn't delete files from the destination.
rclone copyto source:path dest:path
+rclone cryptcheck
+
+Cryptcheck checks the integrity of a crypted remote.
+
+Synopsis
+
+rclone cryptcheck checks a remote against a crypted remote. This is the
+equivalent of running rclone check, but able to check the checksums of
+the crypted remote.
+
+For it to work the underlying remote of the crypted remote must support
+some kind of checksum.
+
+It works by reading the nonce from each file on the cryptedremote: and
+using that to encrypt each file on the remote:. It then checks the
+checksum of the underlying file on the cryptedremote: against the
+checksum of the file it has just encrypted.
+
+Use it like this
+
+ rclone cryptcheck /path/to/files encryptedremote:path
+
+You can use it like this also, but that will involve downloading all the
+files in remote:path.
+
+ rclone cryptcheck remote:path encryptedremote:path
+
+After it has run it will log the status of the encryptedremote:.
+
+ rclone cryptcheck remote:path cryptedremote:path
+
+
rclone genautocomplete
Output bash completion script for rclone.
@@ -702,7 +815,8 @@ This is EXPERIMENTAL - use with care.
First set up your remote using rclone config. Check it works with
rclone ls etc.
-Start the mount like this
+Start the mount like this (note the & on the end to put rclone in the
+background).
rclone mount remote:path/to/files /path/to/local/mount &
@@ -710,22 +824,26 @@ Stop the mount with
fusermount -u /path/to/local/mount
+Or if that fails try
+
+ fusermount -z -u /path/to/local/mount
+
Or with OS X
- umount -u /path/to/local/mount
+ umount /path/to/local/mount
Limitations
This can only write files sequentially; it can only seek when reading.
+This means that many applications won't work with their files on an
+rclone mount.
-Rclone mount inherits rclone's directory handling. In rclone's world
-directories don't really exist. This means that empty directories will
-have a tendency to disappear once they fall out of the directory cache.
-
-The bucket based FSes (eg swift, s3, google compute storage, b2) won't
-work from the root - you will need to specify a bucket, or a path within
-the bucket. So swift: won't work whereas swift:bucket will as will
-swift:bucket/path.
+The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
+Hubic) won't work from the root - you will need to specify a bucket, or
+a path within the bucket. So swift: won't work whereas swift:bucket will
+as will swift:bucket/path. None of these support the concept of
+directories, so empty directories will have a tendency to disappear once
+they fall out of the directory cache.
Only supported on Linux, FreeBSD and OS X at the moment.
@@ -738,6 +856,11 @@ retries in the same way without making local copies of the uploads. This
might happen in the future, but for the moment rclone mount won't do
that, so will be less reliable than the rclone command.
+Filters
+
+Note that all the rclone filters can be used to select a subset of the
+files to be visible in the mount.
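+A sketch of filtering a mount (hypothetical remote and mount point):

```shell
# Only *.jpg files will be visible under the mount point
rclone mount --include "*.jpg" remote:photos /path/to/local/mount &
```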
+
Bugs
- All the remotes should work for read, but some may not for write
@@ -808,6 +931,17 @@ flag.
rclone moveto source:path dest:path
+rclone obscure
+
+Obscure password for use in the rclone.conf
+
+Synopsis
+
+Obscure password for use in the rclone.conf
+
+ rclone obscure password
+
+
rclone rmdirs
Remove any empty directories under the path.
@@ -889,8 +1023,7 @@ If you are using the root directory on its own then don't quote it (see
Server Side Copy
-Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
-copy.
+Most remotes (but not all - see the overview) support server side copy.
This means if you want to copy one folder to another then rclone won't
download all the files and re-upload them; it will instruct the server
@@ -903,11 +1036,12 @@ Eg
Will copy the contents of oldbucket to newbucket without downloading and
re-uploading.
-Remotes which don't support server side copy (eg local) WILL download
-and re-upload in this case.
+Remotes which don't support server side copy WILL download and re-upload
+in this case.
Server side copies are used with sync and copy and will be identified in
-the log when using the -v flag.
+the log when using the -v flag. They may also be used with move if the
+remote doesn't support server side move.
Server side copies will only be attempted if the remote names are the
same.
@@ -931,15 +1065,59 @@ Options which use SIZE use kByte by default. However a suffix of b for
bytes, k for kBytes, M for MBytes and G for GBytes may be used. These
are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
---bwlimit=SIZE
+--backup-dir=DIR
-Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0
-which means to not limit bandwidth.
+When using sync, copy or move any files which would have been
+overwritten or deleted are moved in their original hierarchy into this
+directory.
+
+If --suffix is set, then the moved files will have the suffix added to
+them. If there is a file with the same path (after the suffix has been
+added) in DIR, then it will be overwritten.
+
+The remote in use must support server side move or copy and you must use
+the same remote as the destination of the sync. The backup directory
+must not overlap the destination directory.
+
+For example
+
+ rclone sync /path/to/local remote:current --backup-dir remote:old
+
+will sync /path/to/local to remote:current, but for any files which
+would have been updated or deleted will be stored in remote:old.
+
+If running rclone from a script you might want to use today's date as
+the directory name passed to --backup-dir to store the old files, or you
+might want to pass --suffix with today's date.
+
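+A sketch of the date-stamped variants described above (hypothetical
+paths; assumes a POSIX date command):

```shell
# Keep each day's replaced or deleted files in their own directory
rclone sync /path/to/local remote:current --backup-dir "remote:old/$(date +%Y-%m-%d)"

# Or keep them in one directory with a dated suffix added to each file
rclone sync /path/to/local remote:current --backup-dir remote:old --suffix "-$(date +%Y-%m-%d)"
```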
+--bwlimit=BANDWIDTH_SPEC
+
+This option controls the bandwidth limit. Limits can be specified in two
+ways: As a single limit, or as a timetable.
+
+Single limits last for the duration of the session. To use a single
+limit, specify the desired bandwidth in kBytes/s, or use a suffix
+b|k|M|G. The default is 0 which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
-This only limits the bandwidth of the data transfer, it doesn't limit
-the bandwith of the directory listings etc.
+It is also possible to specify a "timetable" of limits, which will cause
+certain limits to be applied at certain times. To specify a timetable,
+format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
+
+An example of a typical timetable to avoid link saturation during
+daytime working hours could be:
+
+--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
+
+In this example, the transfer bandwidth will be set to 512kBytes/sec at
+8am. At noon, it will raise to 10Mbytes/s, and drop back to
+512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
+30MBytes/s, and at 11pm it will be completely disabled (full speed).
+Anything between 11pm and 8am will remain unlimited.
+
+Bandwidth limits only apply to the data transfer. They don't apply to
+the bandwidth of the directory listings etc.
Note that the units are Bytes/s not Bits/s. Typically connections are
measured in Bits/s - to convert divide by 8. For example let's say you
@@ -947,6 +1125,13 @@ have a 10 Mbit/s connection and you wish rclone to use half of it - 5
Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
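The Bits/s to Bytes/s conversion can be scripted; this is a sketch using
standard awk, with the 5 Mbit/s figure from the example above:

```shell
# Divide a speed in Mbit/s by 8 to get the MByte/s value for --bwlimit
mbit=5
limit=$(awk -v m="$mbit" 'BEGIN { printf "%.3f", m / 8 }')
echo "--bwlimit ${limit}M"
```

This prints --bwlimit 0.625M, matching the worked example in the text.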
+--buffer-size=SIZE
+
+Use this sized buffer to speed up file transfers. Each --transfer will
+use this much memory for buffering.
+
+Set to 0 to disable the buffering for the minimum memory use.
+
--checkers=N
The number of checkers to run in parallel. Checkers do the equality
@@ -977,10 +1162,17 @@ are incorrect as it would normally.
--config=CONFIG_FILE
-Specify the location of the rclone config file. Normally this is in your
-home directory as a file called .rclone.conf. If you run rclone -h and
-look at the help for the --config option you will see where the default
-location is for you. Use this flag to override the config location, eg
+Specify the location of the rclone config file.
+
+Normally the config file is in your home directory as a file called
+.config/rclone/rclone.conf (or .rclone.conf if created with an older
+version). If $XDG_CONFIG_HOME is set it will be at
+$XDG_CONFIG_HOME/rclone/rclone.conf
+
+If you run rclone -h and look at the help for the --config option you
+will see where the default location is for you.
+
+Use this flag to override the config location, eg
rclone --config=".myconfig" .config.
--contimeout=TIME
@@ -1004,6 +1196,15 @@ Do a trial run with no permanent changes. Use this to see what rclone
would do without actually doing it. Useful when setting up the sync
command which deletes files in the destination.
+--ignore-checksum
+
+Normally rclone will check that the checksums of transferred files
+match, and give an error "corrupted on transfer" if they don't.
+
+You can use this option to skip that check. You should only use it if
+you have had the "corrupted on transfer" error message and are sure you
+want to transfer the potentially corrupted data anyway.
+
--ignore-existing
Using this option will make rclone unconditionally skip all files that
@@ -1042,6 +1243,22 @@ Log all of rclone's output to FILE. This is not active by default. This
can be useful for tracking down problems with syncs in combination with
the -v flag. See the Logging section for more info.
+--log-level LEVEL
+
+This sets the log level for rclone. The default log level is INFO.
+
+DEBUG is equivalent to -vv. It outputs lots of debug info - useful for
+bug reports and really finding out what rclone is doing.
+
+INFO is equivalent to -v. It outputs information about each transfer and
+prints stats once a minute by default.
+
+NOTICE is the default log level if no logging flags are supplied. It
+outputs very little when things are working normally. It outputs
+warnings and significant events.
+
+ERROR is equivalent to -q. It only outputs error messages.
+
--low-level-retries NUMBER
This controls the number of low level retries rclone does.
@@ -1154,6 +1371,50 @@ The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
The default is bytes.
+--suffix=SUFFIX
+
+This is for use with --backup-dir only. If this isn't set then
+--backup-dir will move files with their original name. If it is set then
+the files will have SUFFIX added on to them.
+
+See --backup-dir for more info.
+
+--syslog
+
+On capable OSes (not Windows or Plan9) send all log output to syslog.
+
+This can be useful when running rclone in a script or with rclone
+mount.
+
+--syslog-facility string
+
+If using --syslog this sets the syslog facility (eg KERN, USER). See
+man syslog for a list of possible facilities. The default facility is
+DAEMON.
+
+--track-renames
+
+By default rclone doesn't keep track of renamed files, so if you
+rename a file locally then sync it to a remote, rclone will delete the
+old file on the remote and upload a new copy.
+
+If you use this flag, and the remote supports server side copy or server
+side move, and the source and destination have a compatible hash, then
+this will track renames during sync, copy, and move operations and
+perform renaming server-side.
+
+Files will be matched by size and hash - if both match then a rename
+will be considered.
+
+If the destination does not support server-side copy or move, rclone
+will fall back to the default behaviour and log an error level message
+to the console.
+
+Note that --track-renames is incompatible with --no-traverse and that it
+uses extra memory to keep track of all the rename candidates.
+
+Note also that --track-renames is incompatible with --delete-before and
+will select --delete-after instead of --delete-during.
+
--delete-(before,during,after)
This option allows you to specify when files on your destination are
@@ -1161,16 +1422,20 @@ deleted when you sync folders.
Specifying the value --delete-before will delete all files present on
the destination, but not on the source _before_ starting the transfer of
-any new or updated files. This uses extra memory as it has to store the
-source listing before proceeding.
+any new or updated files. This uses two passes through the file systems,
+one for the deletions and one for the copies.
-Specifying --delete-during (default value) will delete files while
-checking and uploading files. This is usually the fastest option.
-Currently this works the same as --delete-after but it may change in the
-future.
+Specifying --delete-during will delete files while checking and
+uploading files. This is the fastest option and uses the least memory.
-Specifying --delete-after will delay deletion of files until all
-new/updated files have been successfully transfered.
+Specifying --delete-after (the default value) will delay deletion of
+files until all new/updated files have been successfully transferred.
+files to be deleted are collected in the copy pass then deleted after
+The files to be deleted are collected in the copy pass then deleted
+after the copy pass has completed successfully. The files to be deleted
+are
+held in memory so this mode may use more memory. This is the safest mode
+as it will only delete files if there have been no errors subsequent to
+that. If there have been errors before the deletions start then you will
+get the message "not deleting files as there were IO errors".
--timeout=TIME
@@ -1206,12 +1471,14 @@ This can be useful when transferring to a remote which doesn't support
mod times directly as it is more accurate than a --size-only check and
faster than using --checksum.
--v, --verbose
+-v, -vv, --verbose
-If you set this flag, rclone will become very verbose telling you about
-every file it considers and transfers.
+With -v rclone will tell you about each file that is transferred and a
+small number of significant events.
-Very useful for debugging.
+With -vv rclone will become very verbose telling you about every file it
+considers and transfers. Please send bug reports with a log with this
+setting.
-V, --version
@@ -1349,7 +1616,8 @@ THIS SHOULD BE USED ONLY FOR TESTING.
--no-traverse
The --no-traverse flag controls whether the destination file system is
-traversed when using the copy or move commands.
+traversed when using the copy or move commands. --no-traverse is not
+compatible with sync and will be ignored if you supply it with sync.
If you are only copying a small number of files and/or have a large
number of files on the destination then --no-traverse will stop rclone
@@ -1388,40 +1656,114 @@ See the filtering section.
Logging
-rclone has 3 levels of logging, Error, Info and Debug.
+rclone has 4 levels of logging, Error, Notice, Info and Debug.
-By default rclone logs Error and Info to standard error and Debug to
-standard output. This means you can redirect standard output and
-standard error to different places.
+By default rclone logs to standard error. This means you can redirect
+standard error and still see the normal output of rclone commands (eg
+rclone ls).
-By default rclone will produce Error and Info level messages.
+By default rclone will produce Error and Notice level messages.
If you use the -q flag, rclone will only produce Error messages.
-If you use the -v flag, rclone will produce Error, Info and Debug
+If you use the -v flag, rclone will produce Error, Notice and Info
messages.
+If you use the -vv flag, rclone will produce Error, Notice, Info and
+Debug messages.
+
+You can also control the log levels with the --log-level flag.
+
If you use the --log-file=FILE option, rclone will redirect Error, Info
and Debug messages along with standard error to FILE.
+If you use the --syslog flag then rclone will log to syslog and the
+--syslog-facility control which facility it uses.
+
+Rclone prefixes all log messages with their level in capitals, eg INFO
+which makes it easy to grep the log file for different kinds of
+information.
+
Exit Code
-If any errors occurred during the command, rclone with an exit code of
-1. This allows scripts to detect when rclone operations have failed.
+If any errors occurred during the command, rclone will exit with a
+non-zero exit code. This allows scripts to detect when rclone operations
+have failed.
During the startup phase rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non-zero exit code if (after retries) there were no
-transfers with errors remaining. For every error counted there will be a
-high priority log message (visibile with -q) showing the message and
-which file caused the problem. A high priority message is also shown
-when starting a retry so the user can see that any previous error
-messages may not be valid after the retry. If rclone has done a retry it
-will log a high priority message if the retry was successful.
+only exit with an non-zero exit code if (after retries) there were still
+failed transfers. For every error counted there will be a high priority
+log message (visible with -q) showing the message and which file caused
+the problem. A high priority message is also shown when starting a retry
+so the user can see that any previous error messages may not be valid
+after the retry. If rclone has done a retry it will log a high priority
+message if the retry was successful.
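+A minimal scripting pattern based on this behaviour (hypothetical
+paths; "rclone copy" stands in for any rclone command):

```shell
# rclone exits non-zero if any transfers still failed after retries
if rclone copy /path/to/local remote:backup; then
    echo "backup succeeded"
else
    echo "backup failed with exit code $?" >&2
fi
```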
+
+
+Environment Variables
+
+Rclone can be configured entirely using environment variables. These can
+be used to set defaults for options or config file entries.
+
+Options
+
+Every option in rclone can have its default set by environment variable.
+
+To find the name of the environment variable, first take the long option
+name, strip the leading --, change - to _, make upper case and prepend
+RCLONE_.
+
+For example to always set --stats 5s, set the environment variable
+RCLONE_STATS=5s. If you set stats on the command line this will override
+the environment variable setting.
+
+Or to always use the trash in drive --drive-use-trash, set
+RCLONE_DRIVE_USE_TRASH=true.
+
+The same parser is used for the options and the environment variables so
+they take exactly the same form.
+
+Config file
+
+You can set defaults for values in the config file on an individual
+remote basis. If you want to use this feature, you will need to discover
+the name of the config items that you want. The easiest way is to run
+through rclone config by hand, then look in the config file to see what
+the values are (the config file can be found by looking at the help for
+--config in rclone help).
+
+To find the name of the environment variable that you need to set, take
+RCLONE_ + name of remote + _ + name of config file option and make it
+all uppercase.
+
+For example to configure an S3 remote named mys3: without a config file
+(using unix ways of setting environment variables):
+
+ $ export RCLONE_CONFIG_MYS3_TYPE=s3
+ $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
+ $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
+ $ rclone lsd MYS3:
+ -1 2016-09-21 12:54:21 -1 my-bucket
+ $ rclone listremotes | grep mys3
+ mys3:
+
+Note that if you want to create a remote using environment variables you
+must create the ..._TYPE variable as above.
+
+Other environment variables
+
+- RCLONE_CONFIG_PASS set to contain your config file password (see
+ Configuration Encryption section)
+- HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase
+ versions thereof).
+ - HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
+ - The environment values may be either a complete URL or a
+ "host[:port]" form, in which case the "http" scheme is assumed.
@@ -1944,6 +2286,7 @@ Here is an overview of the major features of each cloud storage system.
Hubic MD5 Yes No No R/W
Backblaze B2 SHA1 Yes No No R/W
Yandex Disk MD5 Yes No No R/W
+ SFTP - Yes Depends No -
The local filesystem All Yes Depends No -
Hash
@@ -1976,7 +2319,8 @@ This can cause problems when syncing between a case insensitive system
and a case sensitive system. The symptom of this is that no matter how
many times you run the sync it never completes fully.
-The local filesystem may or may not be case sensitive depending on OS.
+The local filesystem and SFTP may or may not be case sensitive depending
+on OS.
- Windows - usually case insensitive, though case is preserved
- OSX - usually case insensitive, though it is possible to format case
@@ -2018,19 +2362,20 @@ All the remotes support a basic set of features, but there are some
optional features supported by some remotes used to make some operations
more efficient.
- Name Purge Copy Move DirMove CleanUp
- ---------------------- ------- ------ --------- --------- ---------
- Google Drive Yes Yes Yes Yes No #575
- Amazon S3 No Yes No No No
- Openstack Swift Yes † Yes No No No
- Dropbox Yes Yes Yes Yes No #575
- Google Cloud Storage Yes Yes No No No
- Amazon Drive Yes No Yes Yes No #575
- Microsoft One Drive Yes Yes No #197 No #197 No #575
- Hubic Yes † Yes No No No
- Backblaze B2 No No No No Yes
- Yandex Disk Yes No No No No #575
- The local filesystem Yes No Yes Yes No
+ Name Purge Copy Move DirMove CleanUp
+ ---------------------- ------- ------ ------ --------- ---------
+ Google Drive Yes Yes Yes Yes No #575
+ Amazon S3 No Yes No No No
+ Openstack Swift Yes † Yes No No No
+ Dropbox Yes Yes Yes Yes No #575
+ Google Cloud Storage Yes Yes No No No
+ Amazon Drive Yes No Yes Yes No #575
+ Microsoft One Drive Yes Yes Yes No #197 No #575
+ Hubic Yes † Yes No No No
+ Backblaze B2 No No No No Yes
+ Yandex Disk Yes No No No No #575
+ SFTP No No Yes Yes No
+ The local filesystem Yes No Yes Yes No
Purge
@@ -2099,31 +2444,35 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 6
+ Storage> 7
Google Application Client Id - leave blank normally.
- client_id>
+ client_id>
Google Application Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -2137,8 +2486,8 @@ This will guide you through an interactive setup process:
Got code
--------------------
[remote]
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
@@ -2335,6 +2684,11 @@ Here are the possible extensions with their corresponding mime types.
CSS
-------------------------------------
+--drive-skip-gdocs
+
+Skip google documents in all listings. If given, gdocs practically
+become invisible to rclone.
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -2400,25 +2754,29 @@ This will guide you through an interactive setup process.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
@@ -2459,21 +2817,27 @@ This will guide you through an interactive setup process.
/ Asia Pacific (Tokyo) Region
8 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
+ / Asia Pacific (Seoul)
+ 9 | Needs location constraint ap-northeast-2.
+ \ "ap-northeast-2"
+ / Asia Pacific (Mumbai)
+ 10 | Needs location constraint ap-south-1.
+ \ "ap-south-1"
/ South America (Sao Paulo) Region
- 9 | Needs location constraint sa-east-1.
+ 11 | Needs location constraint sa-east-1.
\ "sa-east-1"
/ If using an S3 clone that only understands v2 signatures
- 10 | eg Ceph/Dreamhost
+ 12 | eg Ceph/Dreamhost
| set this and make sure you set the endpoint.
\ "other-v2-signature"
/ If using an S3 clone that understands v4 signatures set this
- 11 | and make sure you set the endpoint.
+ 13 | and make sure you set the endpoint.
\ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
- endpoint>
+ endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
@@ -2492,7 +2856,11 @@ This will guide you through an interactive setup process.
\ "ap-southeast-2"
8 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / South America (Sao Paulo) Region.
+ 9 / Asia Pacific (Seoul)
+ \ "ap-northeast-2"
+ 10 / Asia Pacific (Mumbai)
+ \ "ap-south-1"
+ 11 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -2539,8 +2907,11 @@ This will guide you through an interactive setup process.
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
- endpoint =
- location_constraint =
+ endpoint =
+ location_constraint =
+ acl = private
+ server_side_encryption =
+ storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2731,7 +3102,7 @@ important to put the region in as stated above.
secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region> us-east-1
endpoint> http://10.0.0.3:9000
- location_constraint>
+ location_constraint>
server_side_encryption>
Which makes the config file look like this
@@ -2742,8 +3113,8 @@ Which makes the config file look like this
secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region = us-east-1
endpoint = http://10.0.0.3:9000
- location_constraint =
- server_side_encryption =
+ location_constraint =
+ server_side_encryption =
Minio doesn't support all the features of S3 yet. In particular it
doesn't support MD5 checksums (ETags) or metadata. This means rclone
@@ -2782,27 +3153,31 @@ This will guide you through an interactive setup process.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 10
+ Storage> 11
User name to log in.
user> user_name
API key or password.
@@ -2824,25 +3199,28 @@ This will guide you through an interactive setup process.
auth> 1
User domain - optional (v3 auth)
domain> Default
- Tenant name - optional
- tenant>
+ Tenant name - optional for v1 auth, required otherwise
+ tenant> tenant_name
Tenant domain - optional (v3 auth)
tenant_domain>
Region name - optional
- region>
+ region>
Storage URL - optional
- storage_url>
- Remote config
+ storage_url>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
- auth_version>
+ auth_version>
+ Remote config
--------------------
[remote]
user = user_name
key = password_or_api_key
auth = https://auth.api.rackspacecloud.com/v1.0
- tenant =
- region =
- storage_url =
+ domain = Default
+ tenant =
+ tenant_domain =
+ region =
+ storage_url =
+ auth_version =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2966,39 +3344,43 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
Storage> 4
Dropbox App Key - leave blank normally.
- app_key>
+ app_key>
Dropbox App Secret - leave blank normally.
- app_secret>
+ app_secret>
Remote config
Please visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
--------------------
[remote]
- app_key =
- app_secret =
+ app_key =
+ app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
--------------------
y) Yes this is OK
@@ -3086,65 +3468,68 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 5
+ Storage> 6
Google Application Client Id - leave blank normally.
- client_id>
+ client_id>
Google Application Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
- service_account_file>
+ service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
- * Object owner gets OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Object owner gets OWNER access, and project team owners get OWNER access.
- 2) bucketOwnerFullControl
- * Object owner gets OWNER access, and project team owners get READER access.
- 3) bucketOwnerRead
- * Object owner gets OWNER access [default if left blank].
- 4) private
- * Object owner gets OWNER access, and project team members get access according to their roles.
- 5) projectPrivate
- * Object owner gets OWNER access, and all Users get READER access.
- 6) publicRead
+ 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Object owner gets OWNER access, and project team owners get OWNER access.
+ \ "bucketOwnerFullControl"
+ 3 / Object owner gets OWNER access, and project team owners get READER access.
+ \ "bucketOwnerRead"
+ 4 / Object owner gets OWNER access [default if left blank].
+ \ "private"
+ 5 / Object owner gets OWNER access, and project team members get access according to their roles.
+ \ "projectPrivate"
+ 6 / Object owner gets OWNER access, and all Users get READER access.
+ \ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
- * Project team owners get OWNER access, and all Authenticated Users get READER access.
- 1) authenticatedRead
- * Project team owners get OWNER access [default if left blank].
- 2) private
- * Project team members get access according to their roles.
- 3) projectPrivate
- * Project team owners get OWNER access, and all Users get READER access.
- 4) publicRead
- * Project team owners get OWNER access, and all Users get WRITER access.
- 5) publicReadWrite
+ 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
+ \ "authenticatedRead"
+ 2 / Project team owners get OWNER access [default if left blank].
+ \ "private"
+ 3 / Project team members get access according to their roles.
+ \ "projectPrivate"
+ 4 / Project team owners get OWNER access, and all Users get READER access.
+ \ "publicRead"
+ 5 / Project team owners get OWNER access, and all Users get WRITER access.
+ \ "publicReadWrite"
bucket_acl> 2
Remote config
- Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine or Y didn't work
@@ -3158,8 +3543,8 @@ This will guide you through an interactive setup process:
--------------------
[remote]
type = google cloud storage
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
@@ -3247,40 +3632,50 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
- client_id>
+ client_id>
Amazon Application Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Remote config
+ Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+ y) Yes
+ n) No
+ y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
@@ -3324,7 +3719,8 @@ Deleting files
Any files you delete with rclone will end up in the trash. Amazon don't
provide an API to permanently delete files, nor to empty the trash, so
you will have to do that with one of Amazon's apps or via the Amazon
-Drive website.
+Drive website. As of November 17, 2016, files are automatically deleted
+by Amazon from the trash after 30 days.
Using with non .com Amazon accounts
@@ -3392,13 +3788,13 @@ the maximum size of uploaded files. Note that --max-size does not split
files into segments, it only ignores files over this size.
-Microsoft One Drive
+Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory.
-The initial setup for One Drive involves getting a token from Microsoft
+The initial setup for OneDrive involves getting a token from Microsoft
which you need to do in your browser. rclone config walks you through
it.
@@ -3417,31 +3813,35 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 9
+ Storage> 10
Microsoft App Client Id - leave blank normally.
- client_id>
+ client_id>
Microsoft App Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -3455,8 +3855,8 @@ This will guide you through an interactive setup process:
Got code
--------------------
[remote]
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -3475,21 +3875,21 @@ unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
-List directories in top level of your One Drive
+List directories in top level of your OneDrive
rclone lsd remote:
-List all the files in your One Drive
+List all the files in your OneDrive
rclone ls remote:
-To copy a local directory to an One Drive directory called backup
+To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
Modified time and hashes
-One Drive allows modification times to be set on objects accurate to 1
+OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
@@ -3500,7 +3900,7 @@ Deleting files
Any files you delete with rclone will end up in the trash. Microsoft
doesn't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Microsoft's apps or via
-the One Drive website.
+the OneDrive website.
Specific options
@@ -3518,14 +3918,14 @@ is 10MB.
Limitations
-Note that One Drive is case insensitive so you can't have a file called
+Note that OneDrive is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
-Rclone only supports your default One Drive, and doesn't work with One
+Rclone only supports your default OneDrive, and doesn't work with One
Drive for business. Both these issues may be fixed at some point
depending on user demand!
-There are quite a few characters that can't be in One Drive file names.
+There are quite a few characters that can't be in OneDrive file names.
These can't occur on Windows platforms, but on non-Windows platforms
they are common. Rclone will map these names to and from an identical
looking unicode equivalent. For example if a file has a ? in it will be
@@ -3559,31 +3959,35 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 7
+ Storage> 8
Hubic Client Id - leave blank normally.
- client_id>
+ client_id>
Hubic Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -3597,8 +4001,8 @@ This will guide you through an interactive setup process:
Got code
--------------------
[remote]
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"access_token":"XXXXXX"}
--------------------
y) Yes this is OK
@@ -3679,25 +4083,29 @@ which you can get from the b2 control panel.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
Storage> 3
Account ID
@@ -3705,13 +4113,13 @@ which you can get from the b2 control panel.
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
- endpoint>
+ endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
- endpoint =
+ endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -3944,31 +4352,35 @@ This will guide you through an interactive setup process:
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph)
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
- 5 / Google Cloud Storage (this is not Google Drive)
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 6 / Google Drive
+ 7 / Google Drive
\ "drive"
- 7 / Hubic
+ 8 / Hubic
\ "hubic"
- 8 / Local Disk
+ 9 / Local Disk
\ "local"
- 9 / Microsoft OneDrive
+ 10 / Microsoft OneDrive
\ "onedrive"
- 10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 11 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
- Storage> 11
+ Storage> 13
Yandex Client Id - leave blank normally.
- client_id>
+ client_id>
Yandex Client Secret - leave blank normally.
- client_secret>
+ client_secret>
Remote config
Use auto config?
* Say Y if not sure
@@ -3982,8 +4394,8 @@ This will guide you through an interactive setup process:
Got code
--------------------
[remote]
- client_id =
- client_secret =
+ client_id =
+ client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
--------------------
y) Yes this is OK
@@ -4029,6 +4441,125 @@ MD5 checksums
MD5 checksums are natively supported by Yandex Disk.
+SFTP
+
+SFTP is the Secure (or SSH) File Transfer Protocol.
+
+It runs over SSH v2 and is standard with most modern SSH installations.
+
+Paths are specified as remote:path. If the path does not begin with a /
+it is relative to the home directory of the user. An empty path remote:
+refers to the user's home directory.
+
+Here is an example of making an SFTP configuration. First run
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+ No remotes found - make a new one
+ n) New remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ n/r/c/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 7 / Google Drive
+ \ "drive"
+ 8 / Hubic
+ \ "hubic"
+ 9 / Local Disk
+ \ "local"
+ 10 / Microsoft OneDrive
+ \ "onedrive"
+ 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
+ \ "yandex"
+ Storage> 12
+ SSH host to connect to
+ Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "example.com"
+ host> example.com
+ SSH username, leave blank for current username, ncw
+ user>
+ SSH port
+ port>
+ SSH password, leave blank to use ssh-agent
+ y) Yes type in my own password
+ g) Generate random password
+ n) No leave this optional password blank
+ y/g/n> n
+ Remote config
+ --------------------
+ [remote]
+ host = example.com
+ user =
+ port =
+ pass =
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This remote is called remote and can now be used like this
+
+See all directories in the home directory
+
+ rclone lsd remote:
+
+Make a new directory
+
+ rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+ rclone ls remote:path/to/directory
+
+Sync /home/local/directory to the remote directory, deleting any excess
+files in the directory.
+
+ rclone sync /home/local/directory remote:directory
+
+Modified time
+
+Modified times are stored on the server to 1 second precision.
+
+Modified times are used in syncing and are fully supported.
+
+Limitations
+
+SFTP does not support any checksums.
+
+SFTP isn't supported under plan9 until this issue is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: --dump-headers, --dump-bodies, --dump-auth
+
+Note that --timeout isn't supported (but --contimeout is).
+
+
Crypt
The crypt remote encrypts and decrypts another remote.
@@ -4079,12 +4610,14 @@ differentiate it from the remote.
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 12 / Yandex Disk
+ 12 / SSH/SFTP Connection
+ \ "sftp"
+ 13 / Yandex Disk
\ "yandex"
Storage> 5
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
- "myremote:bucket" or "myremote:"
+ "myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
Choose a number from below, or type in your own value
@@ -4139,8 +4672,10 @@ if you reconfigure rclone with the same passwords/passphrases elsewhere
it will be compatible - all the secrets used are derived from those two
passwords/passphrases.
-Note that rclone does not encrypt * file length - this can be calcuated
-within 16 bytes * modification time - used for syncing
+Note that rclone does not encrypt
+
+- file length - this can be calculated within 16 bytes
+- modification time - used for syncing
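The file-length point can be made concrete. Below is a hedged sketch, assuming the layout described in the File formats section later in this document (a 32-byte header, then the plaintext in 64 KiB blocks, each carrying a 16-byte authenticator); the constants are assumptions taken from that description, not a definitive implementation:

```python
import math

# Hedged sketch of crypt's size overhead, assuming a 32-byte header
# (magic + nonce) followed by the plaintext in 64 KiB blocks, each with
# a 16-byte authenticator appended.
HEADER = 32          # assumed: magic + nonce
BLOCK = 64 * 1024    # assumed: plaintext bytes per encrypted block
TAG = 16             # assumed: authenticator bytes per block

def encrypted_size(n: int) -> int:
    """Encrypted file size for an n-byte plaintext."""
    if n == 0:
        return HEADER
    return HEADER + n + TAG * math.ceil(n / BLOCK)
```

Under these assumptions the encrypted length determines the plaintext length to within a block's 16-byte authenticator, which is why crypt cannot hide how big your files are.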
Specifying the remote
@@ -4220,13 +4755,20 @@ File name encryption modes
Here are some of the features of the file name encryption modes
-Off * doesn't hide file names or directory structure * allows for longer
-file names (~246 characters) * can use sub paths and copy single files
+Off
-Standard * file names encrypted * file names can't be as long (~156
-characters) * can use sub paths and copy single files * directory
-structure visibile * identical files names will have identical uploaded
-names * can use shortcuts to shorten the directory recursion
+- doesn't hide file names or directory structure
+- allows for longer file names (~246 characters)
+- can use sub paths and copy single files
+
+Standard
+
+- file names encrypted
+- file names can't be as long (~156 characters)
+- can use sub paths and copy single files
+- directory structure visible
+- identical file names will have identical uploaded names
+- can use shortcuts to shorten the directory recursion
Cloud storage systems have various limits on file name length and total
path length which you are more likely to hit using "Standard" file name
@@ -4244,6 +4786,50 @@ depends on that.
Hashes are not stored for crypt. However the data integrity is protected
by an extremely strong crypto authenticator.
+Note that you should use the rclone cryptcheck command to check the
+integrity of a crypted remote instead of rclone check which can't check
+the checksums properly.
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--crypt-show-mapping
+
+If this flag is set then for each file that the remote is asked to list,
+it will log (at level INFO) a line stating the decrypted file name and
+the encrypted file name.
+
+This is so you can work out which encrypted names correspond to which
+decrypted names, in case you need to do something with the encrypted file
+names, or for debugging purposes.
+
+
+Backing up a crypted remote
+
+If you wish to back up a crypted remote, it is recommended that you use
+rclone sync on the encrypted files, and make sure the passwords are the
+same in the new encrypted remote.
+
+This will have the following advantages
+
+- rclone sync will check the checksums while copying
+- you can use rclone check between the encrypted remotes
+- you don't decrypt and encrypt unnecessarily
+
+For example, let's say you have your original remote at remote: with the
+encrypted version at eremote: with path remote:crypt. You would then set
+up the new remote remote2: and then the encrypted version eremote2: with
+path remote2:crypt using the same passwords as eremote:.
+
+To sync the two remotes you would do
+
+ rclone sync remote:crypt remote2:crypt
+
+And to check the integrity you would do
+
+ rclone check remote:crypt remote2:crypt
+
File formats
@@ -4416,6 +5002,41 @@ Specific options
Here are the command line options specific to local storage
+--copy-links, -L
+
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+
+If you supply this flag then rclone will follow the symlink and copy the
+pointed to file or directory.
+
+This flag applies to all commands.
+
+For example, supposing you have a directory structure like this
+
+ $ tree /tmp/a
+ /tmp/a
+ ├── b -> ../b
+ ├── expected -> ../expected
+ ├── one
+ └── two
+ └── three
+
+Then you can see the difference with and without the flag like this
+
+ $ rclone ls /tmp/a
+ 6 one
+ 6 two/three
+
+and
+
+ $ rclone -L ls /tmp/a
+ 4174 expected
+ 6 one
+ 6 two/three
+ 6 b/two
+ 6 b/one
+
--one-file-system, -x
This tells rclone to stay in the filesystem specified by the root and
@@ -4453,6 +5074,102 @@ it isn't supported (eg Windows) it will not appear as an valid flag.
Changelog
+- v1.36 - 2017-03-18
+ - New Features
+ - SFTP remote (Jack Schmidt)
+ - Re-implement sync routine to work a directory at a time reducing
+ memory usage
+    - Logging revamped to be more in line with rsync - now much quieter
+ - -v only shows transfers
+ - -vv is for full debug
+ - --syslog to log to syslog on capable platforms
+ - Implement --backup-dir and --suffix
+ - Implement --track-renames (initial implementation by Bjørn
+ Erik Pedersen)
+ - Add time-based bandwidth limits (Lukas Loesche)
+ - rclone cryptcheck: checks integrity of crypt remotes
+ - Allow all config file variables and options to be set from
+ environment variables
+ - Add --buffer-size parameter to control buffer size for copy
+ - Make --delete-after the default
+ - Add --ignore-checksum flag (fixed by Hisham Zarka)
+ - rclone check: Add --download flag to check all the data, not
+ just hashes
+ - rclone cat: add --head, --tail, --offset, --count and --discard
+ - rclone config: when choosing from a list, allow the value to be
+ entered too
+ - rclone config: allow rename and copy of remotes
+ - rclone obscure: for generating encrypted passwords for rclone's
+ config (T.C. Ferguson)
+ - Comply with XDG Base Directory specification (Dario Giovannetti)
+ - this moves the default location of the config file in a
+ backwards compatible way
+ - Release changes
+ - Ubuntu snap support (Dedsec1)
+ - Compile with go 1.8
+ - MIPS/Linux big and little endian support
+ - Bug Fixes
+ - Fix copyto copying things to the wrong place if the destination
+ dir didn't exist
+ - Fix parsing of remotes in moveto and copyto
+ - Fix --delete-before deleting files on copy
+ - Fix --files-from with an empty file copying everything
+ - Fix sync: don't update mod times if --dry-run set
+ - Fix MimeType propagation
+ - Fix filters to add ** rules to directory rules
+ - Local
+ - Implement -L, --copy-links flag to allow rclone to follow
+ symlinks
+ - Open files in write only mode so rclone can write to an rclone
+ mount
+ - Fix unnormalised unicode causing problems reading directories
+ - Fix interaction between -x flag and --max-depth
+ - Mount
+ - Implement proper directory handling (mkdir, rmdir, renaming)
+ - Make include and exclude filters apply to mount
+ - Implement read and write async buffers - control with
+ --buffer-size
+ - Fix fsync on for directories
+ - Fix retry on network failure when reading off crypt
+ - Crypt
+ - Add --crypt-show-mapping to show encrypted file mapping
+ - Fix crypt writer getting stuck in a loop
+ - IMPORTANT this bug had the potential to cause data
+ corruption when
+ - reading data from a network based remote and
+ - writing to a crypt on Google Drive
+ - Use the cryptcheck command to validate your data if you are
+ concerned
+ - If syncing two crypt remotes, sync the unencrypted remote
+ - Amazon Drive
+ - Fix panics on Move (rename)
+ - Fix panic on token expiry
+ - B2
+ - Fix inconsistent listings and rclone check
+ - Fix uploading empty files with go1.8
+ - Constrain memory usage when doing multipart uploads
+ - Fix upload url not being refreshed properly
+ - Drive
+ - Fix Rmdir on directories with trashed files
+ - Fix "Ignoring unknown object" when downloading
+ - Add --drive-list-chunk
+ - Add --drive-skip-gdocs (Károly Oláh)
+ - OneDrive
+ - Implement Move
+ - Fix Copy
+ - Fix overwrite detection in Copy
+ - Fix waitForJob to parse errors correctly
+ - Use token renewer to stop auth errors on long uploads
+ - Fix uploading empty files with go1.8
+ - Google Cloud Storage
+ - Fix depth 1 directory listings
+ - Yandex
+ - Fix single level directory listing
+ - Dropbox
+ - Normalise the case for single level directory listings
+ - Fix depth 1 listing
+ - S3
+ - Added ca-central-1 region (Jon Yergatian)
- v1.35 - 2017-01-02
- New Features
- moveto and copyto commands for choosing a destination name on
@@ -5281,6 +5998,17 @@ Contributors
- 0xJAKE 0xJAKE@users.noreply.github.com
- Thibault Molleman thibaultmol@users.noreply.github.com
- Scott McGillivray scott.mcgillivray@gmail.com
+- Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com
+- Lukas Loesche lukas@mesosphere.io
+- emyarod allllaboutyou@gmail.com
+- T.C. Ferguson tcf909@gmail.com
+- Brandur brandur@mutelight.org
+- Dario Giovannetti dev@dariogiovannetti.net
+- Károly Oláh okaresz@aol.com
+- Jon Yergatian jon@macfanatic.ca
+- Jack Schmidt github@mowsey.org
+- Dedsec1 Dedsec1@users.noreply.github.com
+- Hisham Zarka hzarka@gmail.com
diff --git a/Makefile b/Makefile
index 08e8909af..ce7bf52e3 100644
--- a/Makefile
+++ b/Makefile
@@ -122,18 +122,20 @@ serve: website
tag: doc
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
- echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)-DEV\"\n" | gofmt > fs/version.go
+ echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
git tag $(NEW_TAG)
- @echo "Add this to changelog in docs/content/changelog.md"
- @echo " * $(NEW_TAG) -" `date -I`
- @git log $(LAST_TAG)..$(NEW_TAG) --oneline
+ @echo "Edit the new changelog in docs/content/changelog.md"
+ @echo " * $(NEW_TAG) -" `date -I` >> docs/content/changelog.md
+ @git log $(LAST_TAG)..$(NEW_TAG) --oneline >> docs/content/changelog.md
@echo "Then commit the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo "And finally run make retag before make cross etc"
retag:
git tag -f $(LAST_TAG)
+ echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(LAST_TAG)-DEV\"\n" | gofmt > fs/version.go
+ git commit -m "Start $(LAST_TAG)-DEV development" fs/version.go
gen_tests:
cd fstest/fstests && go generate
diff --git a/RELEASE.md b/RELEASE.md
index 703e045e3..b5981ee2d 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -1,11 +1,6 @@
-Required software for making a release
+Extra required software for making a release
* [github-release](https://github.com/aktau/github-release) for uploading packages
- * [gox](https://github.com/mitchellh/gox) for cross compiling
- * Run `gox -build-toolchain`
- * This assumes you have your own source checkout
* pandoc for making the html and man pages
- * errcheck - go get github.com/kisielk/errcheck
- * golint - go get github.com/golang/lint
Making a release
* git status - make sure everything is checked in
@@ -16,6 +11,7 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
+ * # Update version number in snapcraft.yml
* git commit -a -v -m "Version v1.XX"
* make retag
* # Set the GOPATH for a current stable go compiler
@@ -23,6 +19,7 @@ Making a release
* make upload
* make upload_website
* git push --tags origin master
+ * git push --tags origin master:stable # update the stable branch for packager.io
* make upload_github
Early in the next release cycle update the vendored dependencies
@@ -31,3 +28,7 @@ Early in the next release cycle update the vendored dependencies
* git add new files
* carry forward any patches to vendor stuff
* git commit -a -v
+
+## Make version number go to -DEV and check in
+
+Make the version number be just in a file?
\ No newline at end of file
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 49fd4412e..c1c9adc99 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -7,6 +7,89 @@ date: "2016-11-06"
Changelog
---------
+ * v1.36 - 2017-03-18
+ * New Features
+ * SFTP remote (Jack Schmidt)
+ * Re-implement sync routine to work a directory at a time reducing memory usage
+ * Logging revamped to be more in line with rsync - now much quieter
+ * -v only shows transfers
+ * -vv is for full debug
+ * --syslog to log to syslog on capable platforms
+ * Implement --backup-dir and --suffix
+ * Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
+ * Add time-based bandwidth limits (Lukas Loesche)
+ * rclone cryptcheck: checks integrity of crypt remotes
+ * Allow all config file variables and options to be set from environment variables
+ * Add --buffer-size parameter to control buffer size for copy
+ * Make --delete-after the default
+ * Add --ignore-checksum flag (fixed by Hisham Zarka)
+ * rclone check: Add --download flag to check all the data, not just hashes
+ * rclone cat: add --head, --tail, --offset, --count and --discard
+ * rclone config: when choosing from a list, allow the value to be entered too
+ * rclone config: allow rename and copy of remotes
+ * rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
+ * Comply with XDG Base Directory specification (Dario Giovannetti)
+ * this moves the default location of the config file in a backwards compatible way
+ * Release changes
+ * Ubuntu snap support (Dedsec1)
+ * Compile with go 1.8
+ * MIPS/Linux big and little endian support
+ * Bug Fixes
+ * Fix copyto copying things to the wrong place if the destination dir didn't exist
+ * Fix parsing of remotes in moveto and copyto
+ * Fix --delete-before deleting files on copy
+ * Fix --files-from with an empty file copying everything
+ * Fix sync: don't update mod times if --dry-run set
+ * Fix MimeType propagation
+ * Fix filters to add ** rules to directory rules
+ * Local
+ * Implement -L, --copy-links flag to allow rclone to follow symlinks
+ * Open files in write only mode so rclone can write to an rclone mount
+ * Fix unnormalised unicode causing problems reading directories
+ * Fix interaction between -x flag and --max-depth
+ * Mount
+ * Implement proper directory handling (mkdir, rmdir, renaming)
+ * Make include and exclude filters apply to mount
+ * Implement read and write async buffers - control with --buffer-size
+ * Fix fsync for directories
+ * Fix retry on network failure when reading off crypt
+ * Crypt
+ * Add --crypt-show-mapping to show encrypted file mapping
+ * Fix crypt writer getting stuck in a loop
+ * **IMPORTANT** this bug had the potential to cause data corruption when
+ * reading data from a network based remote and
+ * writing to a crypt on Google Drive
+ * Use the cryptcheck command to validate your data if you are concerned
+ * If syncing two crypt remotes, sync the unencrypted remote
+ * Amazon Drive
+ * Fix panics on Move (rename)
+ * Fix panic on token expiry
+ * B2
+ * Fix inconsistent listings and rclone check
+ * Fix uploading empty files with go1.8
+ * Constrain memory usage when doing multipart uploads
+ * Fix upload url not being refreshed properly
+ * Drive
+ * Fix Rmdir on directories with trashed files
+ * Fix "Ignoring unknown object" when downloading
+ * Add --drive-list-chunk
+ * Add --drive-skip-gdocs (Károly Oláh)
+ * OneDrive
+ * Implement Move
+ * Fix Copy
+ * Fix overwrite detection in Copy
+ * Fix waitForJob to parse errors correctly
+ * Use token renewer to stop auth errors on long uploads
+ * Fix uploading empty files with go1.8
+ * Google Cloud Storage
+ * Fix depth 1 directory listings
+ * Yandex
+ * Fix single level directory listing
+ * Dropbox
+ * Normalise the case for single level directory listings
+ * Fix depth 1 listing
+ * S3
+ * Added ca-central-1 region (Jon Yergatian)
* v1.35 - 2017-01-02
* New Features
* moveto and copyto commands for choosing a destination name on copy/move
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 100310c5a..fb40fc8eb 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,12 +1,12 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.35-DEV
+Sync files and directories to and from local and remote object stores - v1.36
### Synopsis
@@ -57,12 +57,16 @@ rclone
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -71,6 +75,8 @@ rclone
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -84,12 +90,14 @@ rclone
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -102,6 +110,7 @@ rclone
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -112,11 +121,15 @@ rclone
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
-V, --version Print the version number
```
@@ -128,6 +141,7 @@ rclone
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
+* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
@@ -141,6 +155,7 @@ rclone
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
+* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove any empty directories under the path.
@@ -149,4 +164,4 @@ rclone
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index d14a023fd..d4810141a 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -30,12 +30,16 @@ rclone authorize
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -44,6 +48,8 @@ rclone authorize
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -57,12 +63,14 @@ rclone authorize
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -75,6 +83,7 @@ rclone authorize
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -85,14 +94,18 @@ rclone authorize
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 12dd2a61f..66b057fbf 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -26,11 +26,26 @@ Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
+Use the --head flag to print characters only at the start, --tail for
+the end and --offset and --count to print a section in the middle.
+Note that if offset is negative it will count from the end, so
+--offset -1 --count 1 is equivalent to --tail 1.
+
```
rclone cat remote:path
```
+### Options
+
+```
+ --count int Only print N characters. (default -1)
+ --discard Discard the output instead of printing.
+ --head int Only print the first N characters.
+ --offset int Start printing at offset N (or from end if -ve).
+ --tail int Only print the last N characters.
+```
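The negative-offset rule documented above can be checked against plain coreutils on a local file; this is a local analogy for the byte selection, not an rclone invocation:

```shell
# Illustrate the --head/--tail/--offset/--count byte selection with coreutils.
printf 'abcdefghij' > /tmp/cat_demo.txt

head -c 3 /tmp/cat_demo.txt; echo              # like --head 3             -> abc
tail -c 1 /tmp/cat_demo.txt; echo              # like --tail 1             -> j
tail -c +5 /tmp/cat_demo.txt | head -c 2; echo # like --offset 4 --count 2 -> ef
tail -c 1 /tmp/cat_demo.txt | head -c 1; echo  # like --offset -1 --count 1 -> j (same as --tail 1)

rm -f /tmp/cat_demo.txt
```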
+
### Options inherited from parent commands
```
@@ -41,12 +56,16 @@ rclone cat remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -55,6 +74,8 @@ rclone cat remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -68,12 +89,14 @@ rclone cat remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -86,6 +109,7 @@ rclone cat remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -96,14 +120,18 @@ rclone cat remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 5fcc203c0..d8e52c2f8 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -12,17 +12,29 @@ Checks the files in the source and destination match.
-Checks the files in the source and destination match. It
-compares sizes and MD5SUMs and prints a report of files which
-don't match. It doesn't alter the source or destination.
+Checks the files in the source and destination match. It compares
+sizes and hashes (MD5 or SHA1) and logs a report of files which don't
+match. It doesn't alter the source or destination.
-`--size-only` may be used to only compare the sizes, not the MD5SUMs.
+If you supply the --size-only flag, it will only compare the sizes not
+the hashes as well. Use this for a quick check.
+
+If you supply the --download flag, it will download the data from
+both remotes and check them against each other on the fly. This can
+be useful for remotes that don't support hashes or if you really want
+to check all the data.
```
rclone check source:path dest:path
```
+### Options
+
+```
+ --download Check by downloading rather than with hash.
+```
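Conceptually, `--download` replaces the hash comparison with a byte-for-byte comparison of the two copies, much as `cmp` compares local files. A local sketch of that idea (not an rclone run; the temp file names are illustrative):

```shell
# Sketch of what a download-based check does: compare content, not hashes.
printf 'same data' > /tmp/src.bin
printf 'same data' > /tmp/dst.bin
cmp -s /tmp/src.bin /tmp/dst.bin && echo "0 differences found"

printf 'other data' > /tmp/dst.bin
cmp -s /tmp/src.bin /tmp/dst.bin || echo "files differ"

rm -f /tmp/src.bin /tmp/dst.bin
```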
+
### Options inherited from parent commands
```
@@ -33,12 +45,16 @@ rclone check source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -47,6 +63,8 @@ rclone check source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -60,12 +78,14 @@ rclone check source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -78,6 +98,7 @@ rclone check source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -88,14 +109,18 @@ rclone check source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index fdbef818c..d9abf13be 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -30,12 +30,16 @@ rclone cleanup remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -44,6 +48,8 @@ rclone cleanup remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -57,12 +63,14 @@ rclone cleanup remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -75,6 +83,7 @@ rclone cleanup remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -85,14 +94,18 @@ rclone cleanup remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index 1531997be..5d165e861 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -27,12 +27,16 @@ rclone config
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone config
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone config
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone config
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone config
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index 87d482e13..d6dd4a721 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -66,12 +66,16 @@ rclone copy source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -80,6 +84,8 @@ rclone copy source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -93,12 +99,14 @@ rclone copy source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -111,6 +119,7 @@ rclone copy source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -121,14 +130,18 @@ rclone copy source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index 1f37ee13d..7bb8eb041 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -53,12 +53,16 @@ rclone copyto source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -67,6 +71,8 @@ rclone copyto source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -80,12 +86,14 @@ rclone copyto source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -98,6 +106,7 @@ rclone copyto source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -108,14 +117,18 @@ rclone copyto source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index d0ede842e..00e44f80a 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,5 +1,5 @@
---
-date: 2017-02-20T16:37:25Z
+date: 2017-03-18T11:14:54Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -126,6 +126,6 @@ rclone cryptcheck remote:path cryptedremote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 20-Feb-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index fe7a5656c..d6bd82c12 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -108,12 +108,16 @@ rclone dedupe [mode] remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -122,6 +126,8 @@ rclone dedupe [mode] remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -135,12 +141,14 @@ rclone dedupe [mode] remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -153,6 +161,7 @@ rclone dedupe [mode] remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -163,14 +172,18 @@ rclone dedupe [mode] remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index 2545c2eee..0d1993b1e 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -44,12 +44,16 @@ rclone delete remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -58,6 +62,8 @@ rclone delete remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -71,12 +77,14 @@ rclone delete remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -89,6 +97,7 @@ rclone delete remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -99,14 +108,18 @@ rclone delete remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index a3942ad62..3d69b5fad 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -42,12 +42,16 @@ rclone genautocomplete [output_file]
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -56,6 +60,8 @@ rclone genautocomplete [output_file]
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -69,12 +75,14 @@ rclone genautocomplete [output_file]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -87,6 +95,7 @@ rclone genautocomplete [output_file]
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -97,14 +106,18 @@ rclone genautocomplete [output_file]
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index fb5238489..2c51a4ed7 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -30,12 +30,16 @@ rclone gendocs output_directory
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -44,6 +48,8 @@ rclone gendocs output_directory
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -57,12 +63,14 @@ rclone gendocs output_directory
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -75,6 +83,7 @@ rclone gendocs output_directory
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -85,14 +94,18 @@ rclone gendocs output_directory
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 3744910b7..a8033bb6f 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -37,12 +37,16 @@ rclone listremotes
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -51,6 +55,8 @@ rclone listremotes
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -64,12 +70,14 @@ rclone listremotes
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -82,6 +90,7 @@ rclone listremotes
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -92,14 +101,18 @@ rclone listremotes
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 5b567e420..cd96b1195 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -27,12 +27,16 @@ rclone ls remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone ls remote:path
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone ls remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone ls remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone ls remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 3aeea3f95..00c20e6ab 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -27,12 +27,16 @@ rclone lsd remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone lsd remote:path
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone lsd remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone lsd remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone lsd remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index b324609a5..a7ce8e703 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -27,12 +27,16 @@ rclone lsl remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone lsl remote:path
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone lsl remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone lsl remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone lsl remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 98b55ce99..0ddee1656 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -30,12 +30,16 @@ rclone md5sum remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
@@ -44,6 +48,8 @@ rclone md5sum remote:path
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -57,12 +63,14 @@ rclone md5sum remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -75,6 +83,7 @@ rclone md5sum remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -85,14 +94,18 @@ rclone md5sum remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index 1d8a8f91b..3454063d3 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -27,12 +27,16 @@ rclone mkdir remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone mkdir remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone mkdir remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone mkdir remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone mkdir remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index efabdf926..a2295c7e2 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -19,7 +19,7 @@ This is **EXPERIMENTAL** - use with care.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
-Start the mount like this
+Start the mount like this (note the & on the end to put rclone in the background).
rclone mount remote:path/to/files /path/to/local/mount &
@@ -27,23 +27,27 @@ Stop the mount with
fusermount -u /path/to/local/mount
+Or if that fails try
+
+ fusermount -z -u /path/to/local/mount
+
Or with OS X
- umount -u /path/to/local/mount
+ umount /path/to/local/mount
### Limitations ###
This can only write files sequentially; it can only seek when reading.
+This means that many applications won't work with their files on an
+rclone mount.
-Rclone mount inherits rclone's directory handling. In rclone's world
-directories don't really exist. This means that empty directories
-will have a tendency to disappear once they fall out of the directory
-cache.
-
-The bucket based FSes (eg swift, s3, google compute storage, b2) won't
-work from the root - you will need to specify a bucket, or a path
-within the bucket. So `swift:` won't work whereas `swift:bucket` will
-as will `swift:bucket/path`.
+The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
+Hubic) won't work from the root - you will need to specify a bucket,
+or a path within the bucket. So `swift:` won't work whereas
+`swift:bucket` will as will `swift:bucket/path`.
+None of these support the concept of directories, so empty
+directories will have a tendency to disappear once they fall out of
+the directory cache.
Only supported on Linux, FreeBSD and OS X at the moment.
@@ -56,6 +60,11 @@ can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so will be less reliable than the rclone command.
+### Filters ###
+
+Note that all the rclone filters can be used to select a subset of the
+files to be visible in the mount.
+
### Bugs ###
* All the remotes should work for read, but some may not for write
@@ -103,12 +112,16 @@ rclone mount remote:path /path/to/mountpoint
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -117,6 +130,8 @@ rclone mount remote:path /path/to/mountpoint
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -130,12 +145,14 @@ rclone mount remote:path /path/to/mountpoint
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -148,6 +165,7 @@ rclone mount remote:path /path/to/mountpoint
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -158,14 +176,18 @@ rclone mount remote:path /path/to/mountpoint
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index db440f829..43ff68400 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -44,12 +44,16 @@ rclone move source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -58,6 +62,8 @@ rclone move source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -71,12 +77,14 @@ rclone move source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -89,6 +97,7 @@ rclone move source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -99,14 +108,18 @@ rclone move source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index f8456bd6f..d32d4884e 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -56,12 +56,16 @@ rclone moveto source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -70,6 +74,8 @@ rclone moveto source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -83,12 +89,14 @@ rclone moveto source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -101,6 +109,7 @@ rclone moveto source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -111,14 +120,18 @@ rclone moveto source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index e6c2de429..9d37d59da 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2017-02-20T16:37:25Z
+date: 2017-03-18T11:14:54Z
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -103,6 +103,6 @@ rclone obscure password
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 20-Feb-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index 08c44708f..6a1aab9ef 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -31,12 +31,16 @@ rclone purge remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -45,6 +49,8 @@ rclone purge remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -58,12 +64,14 @@ rclone purge remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -76,6 +84,7 @@ rclone purge remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -86,14 +95,18 @@ rclone purge remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index 76aa7ad6f..7444d0671 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -29,12 +29,16 @@ rclone rmdir remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -43,6 +47,8 @@ rclone rmdir remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -56,12 +62,14 @@ rclone rmdir remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -74,6 +82,7 @@ rclone rmdir remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -84,14 +93,18 @@ rclone rmdir remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index 2cceb364c..883f660e6 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -34,12 +34,16 @@ rclone rmdirs remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -48,6 +52,8 @@ rclone rmdirs remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -61,12 +67,14 @@ rclone rmdirs remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -79,6 +87,7 @@ rclone rmdirs remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -89,14 +98,18 @@ rclone rmdirs remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index 7e2bfc9ea..4bcc9ce30 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -30,12 +30,16 @@ rclone sha1sum remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -44,6 +48,8 @@ rclone sha1sum remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -57,12 +63,14 @@ rclone sha1sum remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -75,6 +83,7 @@ rclone sha1sum remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -85,14 +94,18 @@ rclone sha1sum remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index fa766a540..9825a2380 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -27,12 +27,16 @@ rclone size remote:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone size remote:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone size remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone size remote:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone size remote:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index b46d6111d..f3f1473ef 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -46,12 +46,16 @@ rclone sync source:path dest:path
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -60,6 +64,8 @@ rclone sync source:path dest:path
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -73,12 +79,14 @@ rclone sync source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -91,6 +99,7 @@ rclone sync source:path dest:path
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -101,14 +110,18 @@ rclone sync source:path dest:path
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index fc80f19ee..2c0c306a0 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2017-01-02T15:29:14Z
+date: 2017-03-18T11:14:54Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -27,12 +27,16 @@ rclone version
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
- --bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
@@ -41,6 +45,8 @@ rclone version
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -54,12 +60,14 @@ rclone version
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -72,6 +80,7 @@ rclone version
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
@@ -82,14 +91,18 @@ rclone version
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- -v, --verbose Print lots more stuff
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.35-DEV
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
-###### Auto generated by spf13/cobra on 2-Jan-2017
+###### Auto generated by spf13/cobra on 18-Mar-2017
diff --git a/docs/content/downloads.md b/docs/content/downloads.md
index 1547d4ed0..070049801 100644
--- a/docs/content/downloads.md
+++ b/docs/content/downloads.md
@@ -2,41 +2,43 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
-date: "2017-01-02"
+date: "2017-03-18"
---
-Rclone Download v1.35
+Rclone Download v1.36
=====================
* Windows
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-windows-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-windows-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-windows-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-windows-amd64.zip)
* OSX
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-osx-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-osx-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-osx-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-osx-amd64.zip)
* Linux
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-linux-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-linux-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-linux-arm.zip)
- * [ARM - 64 Bit](http://downloads.rclone.org/rclone-v1.35-linux-arm64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-linux-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-linux-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.36-linux-arm.zip)
+ * [ARM - 64 Bit](http://downloads.rclone.org/rclone-v1.36-linux-arm64.zip)
+ * [MIPS - Big Endian](http://downloads.rclone.org/rclone-v1.36-linux-mips.zip)
+ * [MIPS - Little Endian](http://downloads.rclone.org/rclone-v1.36-linux-mipsle.zip)
* FreeBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-freebsd-arm.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-freebsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-freebsd-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.36-freebsd-arm.zip)
* NetBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-amd64.zip)
- * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.35-netbsd-arm.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-netbsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-netbsd-amd64.zip)
+ * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.36-netbsd-arm.zip)
* OpenBSD
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-openbsd-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-openbsd-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-openbsd-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-openbsd-amd64.zip)
* Plan 9
- * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.35-plan9-386.zip)
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-plan9-amd64.zip)
+ * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.36-plan9-386.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-plan9-amd64.zip)
* Solaris
- * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.35-solaris-amd64.zip)
+ * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.36-solaris-amd64.zip)
-You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.35).
+You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.36).
You can also download [the releases using SSL](https://downloads-rclone-org-7d7d567e.cdn.memsites.com/).
diff --git a/fs/version.go b/fs/version.go
index 87503aaf3..e81480b30 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.35-DEV"
+var Version = "v1.36"
diff --git a/rclone.1 b/rclone.1
index 9c8b3e111..ad2c21c02 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.16.0.2
.\"
-.TH "rclone" "1" "Jan 02, 2017" "User Manual" ""
+.TH "rclone" "1" "Mar 18, 2017" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -30,6 +30,8 @@ Backblaze B2
.IP \[bu] 2
Yandex Disk
.IP \[bu] 2
+SFTP
+.IP \[bu] 2
The local filesystem
.PP
Features
@@ -100,9 +102,9 @@ Copy binary file
.IP
.nf
\f[C]
-sudo\ cp\ rclone\ /usr/sbin/
-sudo\ chown\ root:root\ /usr/sbin/rclone
-sudo\ chmod\ 755\ /usr/sbin/rclone
+sudo\ cp\ rclone\ /usr/bin/
+sudo\ chown\ root:root\ /usr/bin/rclone
+sudo\ chmod\ 755\ /usr/bin/rclone
\f[]
.fi
.PP
@@ -206,14 +208,92 @@ add the role to the hosts you want rclone installed to:
\ \ \ \ \ \ \ \ \ \ \-\ rclone
\f[]
.fi
+.SS Installation with snap
+.SS Quickstart
+.IP \[bu] 2
+install Snapd on your distro using the instructions below
+.IP \[bu] 2
+sudo snap install rclone \-\-classic
+.IP \[bu] 2
+Run \f[C]rclone\ config\f[] to setup.
+See rclone config docs (http://rclone.org/docs/) for more details.
+.PP
+See below for how to install snapd if it isn\[aq]t already installed
+.SS Arch
+.IP
+.nf
+\f[C]
+sudo\ pacman\ \-S\ snapd
+\f[]
+.fi
+.PP
+enable the snapd systemd service:
+.IP
+.nf
+\f[C]
+sudo\ systemctl\ enable\ \-\-now\ snapd.socket
+\f[]
+.fi
+.SS Debian / Ubuntu
+.IP
+.nf
+\f[C]
+sudo\ apt\ install\ snapd
+\f[]
+.fi
+.SS Fedora
+.IP
+.nf
+\f[C]
+sudo\ dnf\ copr\ enable\ zyga/snapcore
+sudo\ dnf\ install\ snapd
+\f[]
+.fi
+.PP
+enable the snapd systemd service:
+.IP
+.nf
+\f[C]
+sudo\ systemctl\ enable\ \-\-now\ snapd.service
+\f[]
+.fi
+.PP
+SELinux support is in beta, so currently:
+.IP
+.nf
+\f[C]
+sudo\ setenforce\ 0
+\f[]
+.fi
+.PP
+to persist, edit \f[C]/etc/selinux/config\f[] to set
+\f[C]SELINUX=permissive\f[] and reboot.
+.SS Gentoo
+.PP
+Install the gentoo\-snappy
+overlay (https://github.com/zyga/gentoo-snappy).
+.SS OpenEmbedded/Yocto
+.PP
+Install the snap meta
+layer (https://github.com/morphis/meta-snappy/blob/master/README.md).
+.SS openSUSE
+.IP
+.nf
+\f[C]
+sudo\ zypper\ addrepo\ http://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/\ snappy
+sudo\ zypper\ install\ snapd
+\f[]
+.fi
+.SS OpenWrt
+.PP
+Enable the snap\-openwrt feed.
.SS Configure
.PP
First you\[aq]ll need to configure rclone.
As the object storage systems have quite complicated authentication
-these are kept in a config file \f[C]\&.rclone.conf\f[] in your home
-directory by default.
-(You can use the \f[C]\-\-config\f[] option to choose a different config
-file.)
+these are kept in a config file.
+(See the \f[C]\-\-config\f[] entry for how to find the config file and
+choose its location.)
.PP
The easiest way to make the config is to run rclone with the config
option:
@@ -249,6 +329,8 @@ Microsoft One Drive (http://rclone.org/onedrive/)
.IP \[bu] 2
Yandex Disk (http://rclone.org/yandex/)
.IP \[bu] 2
+SFTP (http://rclone.org/sftp/)
+.IP \[bu] 2
Crypt (http://rclone.org/crypt/) \- to encrypt other remotes
.SS Usage
.PP
@@ -498,18 +580,31 @@ Checks the files in the source and destination match.
.SS Synopsis
.PP
Checks the files in the source and destination match.
-It compares sizes and MD5SUMs and prints a report of files which
-don\[aq]t match.
+It compares sizes and hashes (MD5 or SHA1) and logs a report of files
+which don\[aq]t match.
It doesn\[aq]t alter the source or destination.
.PP
-\f[C]\-\-size\-only\f[] may be used to only compare the sizes, not the
-MD5SUMs.
+If you supply the \-\-size\-only flag, it will only compare the sizes
+not the hashes as well.
+Use this for a quick check.
+.PP
+If you supply the \-\-download flag, it will download the data from both
+remotes and check them against each other on the fly.
+This can be useful for remotes that don\[aq]t support hashes or if you
+really want to check all the data.
.IP
.nf
\f[C]
rclone\ check\ source:path\ dest:path
\f[]
.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-download\ \ \ Check\ by\ downloading\ rather\ than\ with\ hash.
+\f[]
+.fi
.SS rclone ls
.PP
List all the objects in the path with size and path.
@@ -780,12 +875,29 @@ Or like this to output any .txt files in dir or subdirectories.
rclone\ \-\-include\ "*.txt"\ cat\ remote:path/to/dir
\f[]
.fi
+.PP
+Use the \-\-head flag to print characters only at the start, \-\-tail
+for the end and \-\-offset and \-\-count to print a section in the
+middle.
+Note that if offset is negative it will count from the end, so
+\-\-offset \-1 \-\-count 1 is equivalent to \-\-tail 1.
.IP
.nf
\f[C]
rclone\ cat\ remote:path
\f[]
.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-count\ int\ \ \ \ Only\ print\ N\ characters.\ (default\ \-1)
+\ \ \ \ \ \ \-\-discard\ \ \ \ \ \ Discard\ the\ output\ instead\ of\ printing.
+\ \ \ \ \ \ \-\-head\ int\ \ \ \ \ Only\ print\ the\ first\ N\ characters.
+\ \ \ \ \ \ \-\-offset\ int\ \ \ Start\ printing\ at\ offset\ N\ (or\ from\ end\ if\ \-ve).
+\ \ \ \ \ \ \-\-tail\ int\ \ \ \ \ Only\ print\ the\ last\ N\ characters.
+\f[]
+.fi
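+.PP
+For example, to print just the last 100 characters of a file (an
+illustrative command \- substitute your own remote and path):
+.IP
+.nf
+\f[C]
+rclone\ cat\ \-\-tail\ 100\ remote:path/to/file
+\f[]
+.fi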
.SS rclone copyto
.PP
Copy files from source to dest, skipping already copied
@@ -830,6 +942,47 @@ It doesn\[aq]t delete files from the destination.
rclone\ copyto\ source:path\ dest:path
\f[]
.fi
+.SS rclone cryptcheck
+.PP
+Cryptcheck checks the integrity of a crypted remote.
+.SS Synopsis
+.PP
+rclone cryptcheck checks a remote against a crypted remote.
+This is the equivalent of running rclone check, but able to check the
+checksums of the crypted remote.
+.PP
+For it to work the underlying remote of the cryptedremote must support
+some kind of checksum.
+.PP
+It works by reading the nonce from each file on the cryptedremote: and
+using that to encrypt each file on the remote:.
+It then checks the checksum of the underlying file on the cryptedremote:
+against the checksum of the file it has just encrypted.
+.PP
+Use it like this
+.IP
+.nf
+\f[C]
+rclone\ cryptcheck\ /path/to/files\ encryptedremote:path
+\f[]
+.fi
+.PP
+You can use it like this also, but that will involve downloading all the
+files in remote:path.
+.IP
+.nf
+\f[C]
+rclone\ cryptcheck\ remote:path\ encryptedremote:path
+\f[]
+.fi
+.PP
+After it has run it will log the status of the encryptedremote:.
+.IP
+.nf
+\f[C]
+rclone\ cryptcheck\ remote:path\ cryptedremote:path
+\f[]
+.fi
.SS rclone genautocomplete
.PP
Output bash completion script for rclone.
@@ -912,7 +1065,8 @@ This is \f[B]EXPERIMENTAL\f[] \- use with care.
First set up your remote using \f[C]rclone\ config\f[].
Check it works with \f[C]rclone\ ls\f[] etc.
.PP
-Start the mount like this
+Start the mount like this (note the & on the end to put rclone in the
+background).
.IP
.nf
\f[C]
@@ -928,27 +1082,35 @@ fusermount\ \-u\ /path/to/local/mount
\f[]
.fi
.PP
+Or if that fails try
+.IP
+.nf
+\f[C]
+fusermount\ \-z\ \-u\ /path/to/local/mount
+\f[]
+.fi
+.PP
Or with OS X
.IP
.nf
\f[C]
-umount\ \-u\ /path/to/local/mount
+umount\ /path/to/local/mount
\f[]
.fi
.SS Limitations
.PP
This can only write files sequentially, it can only seek when reading.
+This means that many applications won\[aq]t work with their files on an
+rclone mount.
.PP
-Rclone mount inherits rclone\[aq]s directory handling.
-In rclone\[aq]s world directories don\[aq]t really exist.
-This means that empty directories will have a tendency to disappear once
-they fall out of the directory cache.
-.PP
-The bucket based FSes (eg swift, s3, google compute storage, b2)
-won\[aq]t work from the root \- you will need to specify a bucket, or a
-path within the bucket.
+The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
+Hubic) won\[aq]t work from the root \- you will need to specify a
+bucket, or a path within the bucket.
So \f[C]swift:\f[] won\[aq]t work whereas \f[C]swift:bucket\f[] will as
will \f[C]swift:bucket/path\f[].
+None of these support the concept of directories, so empty directories
+will have a tendency to disappear once they fall out of the directory
+cache.
.PP
Only supported on Linux, FreeBSD and OS X at the moment.
.SS rclone mount vs rclone sync/copy
@@ -960,6 +1122,10 @@ However rclone mount can\[aq]t use retries in the same way without
making local copies of the uploads.
This might happen in the future, but for the moment rclone mount
won\[aq]t do that, so will be less reliable than the rclone command.
+.SS Filters
+.PP
+Note that all the rclone filters can be used to select a subset of the
+files to be visible in the mount.
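+.PP
+For example, to make only the JPEG files on the remote visible in the
+mount (an illustrative command \- adjust the remote and mount point):
+.IP
+.nf
+\f[C]
+rclone\ mount\ \-\-include\ "*.jpg"\ remote:path/to/files\ /path/to/local/mount\ &
+\f[]
+.fi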
.SS Bugs
.IP \[bu] 2
All the remotes should work for read, but some may not for write
@@ -1051,6 +1217,18 @@ src will be deleted on successful transfer.
rclone\ moveto\ source:path\ dest:path
\f[]
.fi
+.SS rclone obscure
+.PP
+Obscure password for use in the rclone.conf
+.SS Synopsis
+.PP
+Obscure password for use in the rclone.conf
+.IP
+.nf
+\f[C]
+rclone\ obscure\ password
+\f[]
+.fi
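+.PP
+For example, to generate an obscured form of a password for pasting
+into the config file (an illustrative command \- use your real
+password):
+.IP
+.nf
+\f[C]
+rclone\ obscure\ mysecretpassword
+\f[]
+.fi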
.SS rclone rmdirs
.PP
Remove any empty directories under the path.
@@ -1160,8 +1338,8 @@ rclone\ copy\ E:\\\ remote:backup
.fi
.SS Server Side Copy
.PP
-Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
-copy.
+Most remotes (but not all \- see the
+overview (/overview/#optional-features)) support server side copy.
.PP
This means if you want to copy one folder to another then rclone
won\[aq]t download all the files and re\-upload them; it will instruct
@@ -1178,11 +1356,13 @@ rclone\ copy\ s3:oldbucket\ s3:newbucket
Will copy the contents of \f[C]oldbucket\f[] to \f[C]newbucket\f[]
without downloading and re\-uploading.
.PP
-Remotes which don\[aq]t support server side copy (eg local)
-\f[B]will\f[] download and re\-upload in this case.
+Remotes which don\[aq]t support server side copy \f[B]will\f[] download
+and re\-upload in this case.
.PP
Server side copies are used with \f[C]sync\f[] and \f[C]copy\f[] and
will be identified in the log when using the \f[C]\-v\f[] flag.
+They may also be used with \f[C]move\f[] if the remote doesn\[aq]t
+support server side move.
.PP
Server side copies will only be attempted if the remote names are the
same.
@@ -1209,16 +1389,71 @@ Options which use SIZE use kByte by default.
However a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes,
\f[C]M\f[] for MBytes and \f[C]G\f[] for GBytes may be used.
These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
-.SS \-\-bwlimit=SIZE
+.SS \-\-backup\-dir=DIR
.PP
-Bandwidth limit in kBytes/s, or use suffix b|k|M|G.
+When using \f[C]sync\f[], \f[C]copy\f[] or \f[C]move\f[] any files which
+would have been overwritten or deleted are moved in their original
+hierarchy into this directory.
+.PP
+If \f[C]\-\-suffix\f[] is set, then the moved files will have the suffix
+added to them.
+If there is a file with the same path (after the suffix has been added)
+in DIR, then it will be overwritten.
+.PP
+The remote in use must support server side move or copy and you must use
+the same remote as the destination of the sync.
+The backup directory must not overlap the destination directory.
+.PP
+For example
+.IP
+.nf
+\f[C]
+rclone\ sync\ /path/to/local\ remote:current\ \-\-backup\-dir\ remote:old
+\f[]
+.fi
+.PP
+will sync \f[C]/path/to/local\f[] to \f[C]remote:current\f[], but any
+files which would have been updated or deleted will be stored in
+\f[C]remote:old\f[].
+.PP
+If running rclone from a script you might want to use today\[aq]s date
+as the directory name passed to \f[C]\-\-backup\-dir\f[] to store the
+old files, or you might want to pass \f[C]\-\-suffix\f[] with
+today\[aq]s date.
+.SS \-\-bwlimit=BANDWIDTH_SPEC
+.PP
+This option controls the bandwidth limit.
+Limits can be specified in two ways: As a single limit, or as a
+timetable.
+.PP
+Single limits last for the duration of the session.
+To use a single limit, specify the desired bandwidth in kBytes/s, or use
+a suffix b|k|M|G.
The default is \f[C]0\f[] which means to not limit bandwidth.
.PP
For example to limit bandwidth usage to 10 MBytes/s use
\f[C]\-\-bwlimit\ 10M\f[]
.PP
-This only limits the bandwidth of the data transfer, it doesn\[aq]t
-limit the bandwith of the directory listings etc.
+It is also possible to specify a "timetable" of limits, which will cause
+certain limits to be applied at certain times.
+To specify a timetable, format your entries as "HH:MM,BANDWIDTH
+HH:MM,BANDWIDTH...".
+.PP
+An example of a typical timetable to avoid link saturation during
+daytime working hours could be:
+.PP
+\f[C]\-\-bwlimit\ "08:00,512\ 12:00,10M\ 13:00,512\ 18:00,30M\ 23:00,off"\f[]
+.PP
+In this example, the transfer bandwidth will be set to 512kBytes/sec at
+8am.
+At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at
+1pm.
+At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it
+will be completely disabled (full speed).
+Anything between 11pm and 8am will remain unlimited.
+.PP
+Bandwidth limits only apply to the data transfer.
+They don\[aq]t apply to the bandwidth of the directory listings etc.
.PP
Note that the units are Bytes/s not Bits/s.
Typically connections are measured in Bits/s \- to convert divide by 8.
@@ -1226,6 +1461,12 @@ For example let\[aq]s say you have a 10 Mbit/s connection and you wish
rclone to use half of it \- 5 Mbit/s.
This is 5/8 = 0.625MByte/s so you would use a
\f[C]\-\-bwlimit\ 0.625M\f[] parameter for rclone.
+.SS \-\-buffer\-size=SIZE
+.PP
+Use this sized buffer to speed up file transfers.
+Each \f[C]\-\-transfer\f[] will use this much memory for buffering.
+.PP
+Set to 0 to disable the buffering for the minimum memory use.
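+.PP
+For example, to use a 32 MByte buffer for each transfer (an
+illustrative command):
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-buffer\-size\ 32M\ /path/to/local\ remote:backup
+\f[]
+.fi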
.SS \-\-checkers=N
.PP
The number of checkers to run in parallel.
@@ -1257,11 +1498,17 @@ they are incorrect as it would normally.
.SS \-\-config=CONFIG_FILE
.PP
Specify the location of the rclone config file.
-Normally this is in your home directory as a file called
-\f[C]\&.rclone.conf\f[].
+.PP
+Normally the config file is in your home directory as a file called
+\f[C]\&.config/rclone/rclone.conf\f[] (or \f[C]\&.rclone.conf\f[] if
+created with an older version).
+If \f[C]$XDG_CONFIG_HOME\f[] is set it will be at
+\f[C]$XDG_CONFIG_HOME/rclone/rclone.conf\f[]
+.PP
If you run \f[C]rclone\ \-h\f[] and look at the help for the
\f[C]\-\-config\f[] option you will see where the default location is
for you.
+.PP
Use this flag to override the config location, eg
\f[C]rclone\ \-\-config=".myconfig"\ .config\f[].
.SS \-\-contimeout=TIME
@@ -1287,6 +1534,15 @@ Do a trial run with no permanent changes.
Use this to see what rclone would do without actually doing it.
Useful when setting up the \f[C]sync\f[] command which deletes files in
the destination.
+.SS \-\-ignore\-checksum
+.PP
+Normally rclone will check that the checksums of transferred files
+match, and give an error "corrupted on transfer" if they don\[aq]t.
+.PP
+You can use this option to skip that check.
+You should only use it if you have had the "corrupted on transfer" error
+message and you are sure you might want to transfer potentially
+corrupted data.
.SS \-\-ignore\-existing
.PP
Using this option will make rclone unconditionally skip all files that
@@ -1324,6 +1580,26 @@ This is not active by default.
This can be useful for tracking down problems with syncs in combination
with the \f[C]\-v\f[] flag.
See the Logging section for more info.
+.SS \-\-log\-level LEVEL
+.PP
+This sets the log level for rclone.
+The default log level is \f[C]INFO\f[].
+.PP
+\f[C]DEBUG\f[] is equivalent to \f[C]\-vv\f[].
+It outputs lots of debug info \- useful for bug reports and really
+finding out what rclone is doing.
+.PP
+\f[C]INFO\f[] is equivalent to \f[C]\-v\f[].
+It outputs information about each transfer and prints stats once a
+minute by default.
+.PP
+\f[C]NOTICE\f[] is the default log level if no logging flags are
+supplied.
+It outputs very little when things are working normally.
+It outputs warnings and significant events.
+.PP
+\f[C]ERROR\f[] is equivalent to \f[C]\-q\f[].
+It only outputs error messages.
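+.PP
+For example, these two commands are equivalent (illustrative paths):
+.IP
+.nf
+\f[C]
+rclone\ \-\-log\-level\ DEBUG\ sync\ /path/to/local\ remote:backup
+rclone\ \-vv\ sync\ /path/to/local\ remote:backup
+\f[]
+.fi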
.SS \-\-low\-level\-retries NUMBER
.PP
This controls the number of low level retries rclone does.
@@ -1436,6 +1712,51 @@ The rate is reported as a binary unit, not SI unit.
So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
.PP
The default is \f[C]bytes\f[].
+.SS \-\-suffix=SUFFIX
+.PP
+This is for use with \f[C]\-\-backup\-dir\f[] only.
+If this isn\[aq]t set then \f[C]\-\-backup\-dir\f[] will move files with
+their original name.
+If it is set then the files will have SUFFIX added on to them.
+.PP
+See \f[C]\-\-backup\-dir\f[] for more info.
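+.PP
+For example, to move overwritten files into \f[C]remote:old\f[] with
+today\[aq]s date appended to their names (an illustrative command \-
+substitute the current date):
+.IP
+.nf
+\f[C]
+rclone\ sync\ /path/to/local\ remote:current\ \-\-backup\-dir\ remote:old\ \-\-suffix\ .2017\-03\-18
+\f[]
+.fi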
+.SS \-\-syslog
+.PP
+On capable OSes (not Windows or Plan9) send all log output to syslog.
+.PP
+This can be useful for running rclone in a script or
+\f[C]rclone\ mount\f[].
+.SS \-\-syslog\-facility string
+.PP
+If using \f[C]\-\-syslog\f[] this sets the syslog facility (eg
+\f[C]KERN\f[], \f[C]USER\f[]).
+See \f[C]man\ syslog\f[] for a list of possible facilities.
+The default facility is \f[C]DAEMON\f[].
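+.PP
+For example, to log to the \f[C]LOCAL0\f[] facility (an illustrative
+command):
+.IP
+.nf
+\f[C]
+rclone\ \-\-syslog\ \-\-syslog\-facility\ LOCAL0\ sync\ /path/to/local\ remote:backup
+\f[]
+.fi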
+.SS \-\-track\-renames
+.PP
+By default rclone doesn\[aq]t keep track of renamed files, so if you
+rename a file locally then sync it to a remote, rclone will delete the
+old file on the remote and upload a new copy.
+.PP
+If you use this flag, and the remote supports server side copy or server
+side move, and the source and destination have a compatible hash, then
+this will track renames during \f[C]sync\f[], \f[C]copy\f[], and
+\f[C]move\f[] operations and perform renaming server\-side.
+.PP
+Files will be matched by size and hash \- if both match then a rename
+will be considered.
+.PP
+If the destination does not support server\-side copy or move, rclone
+will fall back to the default behaviour and log an error level message
+to the console.
+.PP
+Note that \f[C]\-\-track\-renames\f[] is incompatible with
+\f[C]\-\-no\-traverse\f[] and that it uses extra memory to keep track of
+all the rename candidates.
+.PP
+Note also that \f[C]\-\-track\-renames\f[] is incompatible with
+\f[C]\-\-delete\-before\f[] and will select \f[C]\-\-delete\-after\f[]
+instead of \f[C]\-\-delete\-during\f[].
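+.PP
+For example (illustrative paths and remote name):
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-track\-renames\ /path/to/local\ remote:backup
+\f[]
+.fi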
.SS \-\-delete\-(before,during,after)
.PP
This option allows you to specify when files on your destination are
@@ -1444,17 +1765,24 @@ deleted when you sync folders.
Specifying the value \f[C]\-\-delete\-before\f[] will delete all files
present on the destination, but not on the source \f[I]before\f[]
starting the transfer of any new or updated files.
-This uses extra memory as it has to store the source listing before
-proceeding.
+This uses two passes through the file systems, one for the deletions and
+one for the copies.
.PP
-Specifying \f[C]\-\-delete\-during\f[] (default value) will delete files
-while checking and uploading files.
-This is usually the fastest option.
-Currently this works the same as \f[C]\-\-delete\-after\f[] but it may
-change in the future.
+Specifying \f[C]\-\-delete\-during\f[] will delete files while checking
+and uploading files.
+This is the fastest option and uses the least memory.
.PP
-Specifying \f[C]\-\-delete\-after\f[] will delay deletion of files until
-all new/updated files have been successfully transfered.
+Specifying \f[C]\-\-delete\-after\f[] (the default value) will delay
+deletion of files until all new/updated files have been successfully
+transferred.
+The files to be deleted are collected in the copy pass then deleted
+after the copy pass has completed successfully.
+The files to be deleted are held in memory so this mode may use more
+memory.
+This is the safest mode as it will only delete files if there have been
+no errors subsequent to that.
+If there have been errors before the deletions start then you will get
+the message \f[C]not\ deleting\ files\ as\ there\ were\ IO\ errors\f[].
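+.PP
+For example, to use the fastest deletion mode on a sync (an
+illustrative command):
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-delete\-during\ /path/to/local\ remote:backup
+\f[]
+.fi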
.SS \-\-timeout=TIME
.PP
This sets the IO idle timeout.
@@ -1490,12 +1818,14 @@ This can be useful when transferring to a remote which doesn\[aq]t
support mod times directly as it is more accurate than a
\f[C]\-\-size\-only\f[] check and faster than using
\f[C]\-\-checksum\f[].
-.SS \-v, \-\-verbose
+.SS \-v, \-vv, \-\-verbose
.PP
-If you set this flag, rclone will become very verbose telling you about
+With \f[C]\-v\f[] rclone will tell you about each file that is
+transferred and a small number of significant events.
+.PP
+With \f[C]\-vv\f[] rclone will become very verbose telling you about
every file it considers and transfers.
-.PP
-Very useful for debugging.
+Please send bug reports with a log with this setting.
.SS \-V, \-\-version
.PP
Prints the version number
@@ -1655,6 +1985,8 @@ This option defaults to \f[C]false\f[].
The \f[C]\-\-no\-traverse\f[] flag controls whether the destination file
system is traversed when using the \f[C]copy\f[] or \f[C]move\f[]
commands.
+\f[C]\-\-no\-traverse\f[] is not compatible with \f[C]sync\f[] and will
+be ignored if you supply it with \f[C]sync\f[].
.PP
If you are only copying a small number of files and/or have a large
number of files on the destination then \f[C]\-\-no\-traverse\f[] will
@@ -1702,30 +2034,42 @@ For the filtering options
See the filtering section (http://rclone.org/filtering/).
.SS Logging
.PP
-rclone has 3 levels of logging, \f[C]Error\f[], \f[C]Info\f[] and
-\f[C]Debug\f[].
+rclone has 4 levels of logging, \f[C]Error\f[], \f[C]Notice\f[],
+\f[C]Info\f[] and \f[C]Debug\f[].
.PP
-By default rclone logs \f[C]Error\f[] and \f[C]Info\f[] to standard
-error and \f[C]Debug\f[] to standard output.
-This means you can redirect standard output and standard error to
-different places.
+By default rclone logs to standard error.
+This means you can redirect standard error and still see the normal
+output of rclone commands (eg \f[C]rclone\ ls\f[]).
.PP
-By default rclone will produce \f[C]Error\f[] and \f[C]Info\f[] level
+By default rclone will produce \f[C]Error\f[] and \f[C]Notice\f[] level
messages.
.PP
If you use the \f[C]\-q\f[] flag, rclone will only produce
\f[C]Error\f[] messages.
.PP
If you use the \f[C]\-v\f[] flag, rclone will produce \f[C]Error\f[],
-\f[C]Info\f[] and \f[C]Debug\f[] messages.
+\f[C]Notice\f[] and \f[C]Info\f[] messages.
+.PP
+If you use the \f[C]\-vv\f[] flag, rclone will produce \f[C]Error\f[],
+\f[C]Notice\f[], \f[C]Info\f[] and \f[C]Debug\f[] messages.
+.PP
+You can also control the log levels with the \f[C]\-\-log\-level\f[]
+flag.
.PP
If you use the \f[C]\-\-log\-file=FILE\f[] option, rclone will redirect
\f[C]Error\f[], \f[C]Info\f[] and \f[C]Debug\f[] messages along with
standard error to FILE.
+.PP
+If you use the \f[C]\-\-syslog\f[] flag then rclone will log to syslog
+and the \f[C]\-\-syslog\-facility\f[] controls which facility it uses.
+.PP
+Rclone prefixes all log messages with their level in capitals, eg INFO
+which makes it easy to grep the log file for different kinds of
+information.
.SS Exit Code
.PP
-If any errors occurred during the command, rclone with an exit code of
-\f[C]1\f[].
+If any errors occurred during the command, rclone will exit with a
+non\-zero exit code.
This allows scripts to detect when rclone operations have failed.
.PP
During the startup phase rclone will exit immediately if an error is
@@ -1733,8 +2077,8 @@ detected in the configuration.
There will always be a log message immediately before exiting.
.PP
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non\-zero exit code if (after retries) there were no
-transfers with errors remaining.
+only exit with a non\-zero exit code if (after retries) there were
+still failed transfers.
For every error counted there will be a high priority log message
(visible with \f[C]\-q\f[]) showing the message and which file caused
the problem.
@@ -1743,6 +2087,74 @@ can see that any previous error messages may not be valid after the
retry.
If rclone has done a retry it will log a high priority message if the
retry was successful.
+.SS Environment Variables
+.PP
+Rclone can be configured entirely using environment variables.
+These can be used to set defaults for options or config file entries.
+.SS Options
+.PP
+Every option in rclone can have its default set by environment variable.
+.PP
+To find the name of the environment variable, first take the long option
+name, strip the leading \f[C]\-\-\f[], change \f[C]\-\f[] to \f[C]_\f[],
+make upper case and prepend \f[C]RCLONE_\f[].
+.PP
+For example to always set \f[C]\-\-stats\ 5s\f[], set the environment
+variable \f[C]RCLONE_STATS=5s\f[].
+If you set stats on the command line this will override the environment
+variable setting.
+.PP
+Or to always use the trash in drive \f[C]\-\-drive\-use\-trash\f[], set
+\f[C]RCLONE_DRIVE_USE_TRASH=true\f[].
+.PP
+The same parser is used for the options and the environment variables so
+they take exactly the same form.
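+.PP
+For example (using unix ways of setting environment variables):
+.IP
+.nf
+\f[C]
+$\ export\ RCLONE_STATS=5s
+$\ rclone\ copy\ /path/to/local\ remote:backup
+\f[]
+.fi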
+.SS Config file
+.PP
+You can set defaults for values in the config file on an individual
+remote basis.
+If you want to use this feature, you will need to discover the name of
+the config items that you want.
+The easiest way is to run through \f[C]rclone\ config\f[] by hand, then
+look in the config file to see what the values are (the config file can
+be found by looking at the help for \f[C]\-\-config\f[] in
+\f[C]rclone\ help\f[]).
+.PP
+To find the name of the environment variable you need to set, take
+\f[C]RCLONE_\f[] + name of remote + \f[C]_\f[] + name of config file
+option and make it all uppercase.
+.PP
+For example to configure an S3 remote named \f[C]mys3:\f[] without a
+config file (using unix ways of setting environment variables):
+.IP
+.nf
+\f[C]
+$\ export\ RCLONE_CONFIG_MYS3_TYPE=s3
+$\ export\ RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
+$\ export\ RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
+$\ rclone\ lsd\ MYS3:
+\ \ \ \ \ \ \ \ \ \ \-1\ 2016\-09\-21\ 12:54:21\ \ \ \ \ \ \ \ \-1\ my\-bucket
+$\ rclone\ listremotes\ |\ grep\ mys3
+mys3:
+\f[]
+.fi
+.PP
+Note that if you want to create a remote using environment variables you
+must create the \f[C]\&..._TYPE\f[] variable as above.
+.SS Other environment variables
+.IP \[bu] 2
+RCLONE_CONFIG_PASS set to contain your config file password (see
+Configuration Encryption (#configuration-encryption) section)
+.IP \[bu] 2
+HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions
+thereof).
+.RS 2
+.IP \[bu] 2
+HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
+.IP \[bu] 2
+The environment values may be either a complete URL or a "host[:port]"
+form, in which case the "http" scheme is assumed.
+.RE
.SH Configuring rclone on a remote / headless machine
.PP
Some of the configurations (those involving oauth2) require an Internet
@@ -2521,6 +2933,19 @@ T}@T{
R/W
T}
T{
+SFTP
+T}@T{
+\-
+T}@T{
+Yes
+T}@T{
+Depends
+T}@T{
+No
+T}@T{
+\-
+T}
+T{
The local filesystem
T}@T{
All
@@ -2569,7 +2994,8 @@ and a case sensitive system.
The symptom of this is that no matter how many times you run the sync it
never completes fully.
.PP
-The local filesystem may or may not be case sensitive depending on OS.
+The local filesystem and SFTP may or may not be case sensitive depending
+on OS.
.IP \[bu] 2
Windows \- usually case insensitive, though case is preserved
.IP \[bu] 2
@@ -2714,7 +3140,7 @@ Yes
T}@T{
Yes
T}@T{
-No #197 (https://github.com/ncw/rclone/issues/197)
+Yes
T}@T{
No #197 (https://github.com/ncw/rclone/issues/197)
T}@T{
@@ -2760,6 +3186,19 @@ T}@T{
No #575 (https://github.com/ncw/rclone/issues/575)
T}
T{
+SFTP
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}
+T{
The local filesystem
T}@T{
Yes
@@ -2849,31 +3288,35 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 6
+Storage>\ 7
Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -2887,8 +3330,8 @@ Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -3155,6 +3598,10 @@ T}@T{
A ZIP file of HTML, Images CSS
T}
.TE
+.SS \-\-drive\-skip\-gdocs
+.PP
+Skip google documents in all listings.
+If given, gdocs practically become invisible to rclone.
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
@@ -3231,25 +3678,29 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 2
Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
@@ -3290,21 +3741,27 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ /\ Asia\ Pacific\ (Tokyo)\ Region
\ 8\ |\ Needs\ location\ constraint\ ap\-northeast\-1.
\ \ \ \\\ "ap\-northeast\-1"
+\ \ \ /\ Asia\ Pacific\ (Seoul)
+\ 9\ |\ Needs\ location\ constraint\ ap\-northeast\-2.
+\ \ \ \\\ "ap\-northeast\-2"
+\ \ \ /\ Asia\ Pacific\ (Mumbai)
+10\ |\ Needs\ location\ constraint\ ap\-south\-1.
+\ \ \ \\\ "ap\-south\-1"
\ \ \ /\ South\ America\ (Sao\ Paulo)\ Region
-\ 9\ |\ Needs\ location\ constraint\ sa\-east\-1.
+11\ |\ Needs\ location\ constraint\ sa\-east\-1.
\ \ \ \\\ "sa\-east\-1"
\ \ \ /\ If\ using\ an\ S3\ clone\ that\ only\ understands\ v2\ signatures
-10\ |\ eg\ Ceph/Dreamhost
+12\ |\ eg\ Ceph/Dreamhost
\ \ \ |\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint.
\ \ \ \\\ "other\-v2\-signature"
\ \ \ /\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this
-11\ |\ and\ make\ sure\ you\ set\ the\ endpoint.
+13\ |\ and\ make\ sure\ you\ set\ the\ endpoint.
\ \ \ \\\ "other\-v4\-signature"
region>\ 1
Endpoint\ for\ S3\ API.
Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region.
Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph.
-endpoint>\
+endpoint>
Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
@@ -3323,7 +3780,11 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "ap\-southeast\-2"
\ 8\ /\ Asia\ Pacific\ (Tokyo)\ Region.
\ \ \ \\\ "ap\-northeast\-1"
-\ 9\ /\ South\ America\ (Sao\ Paulo)\ Region.
+\ 9\ /\ Asia\ Pacific\ (Seoul)
+\ \ \ \\\ "ap\-northeast\-2"
+10\ /\ Asia\ Pacific\ (Mumbai)
+\ \ \ \\\ "ap\-south\-1"
+11\ /\ South\ America\ (Sao\ Paulo)\ Region.
\ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1
Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3.
@@ -3370,8 +3831,11 @@ env_auth\ =\ false
access_key_id\ =\ access_key
secret_access_key\ =\ secret_key
region\ =\ us\-east\-1
-endpoint\ =\
-location_constraint\ =\
+endpoint\ =
+location_constraint\ =
+acl\ =\ private
+server_side_encryption\ =
+storage_class\ =
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -3614,7 +4078,7 @@ access_key_id>\ WLGDGYAQYIGI833EV05A
secret_access_key>\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF\ \ \
region>\ us\-east\-1
endpoint>\ http://10.0.0.3:9000
-location_constraint>\
+location_constraint>
server_side_encryption>
\f[]
.fi
@@ -3629,8 +4093,8 @@ access_key_id\ =\ WLGDGYAQYIGI833EV05A
secret_access_key\ =\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
region\ =\ us\-east\-1
endpoint\ =\ http://10.0.0.3:9000
-location_constraint\ =\
-server_side_encryption\ =\
+location_constraint\ =
+server_side_encryption\ =
\f[]
.fi
.PP
@@ -3683,27 +4147,31 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 10
+Storage>\ 11
User\ name\ to\ log\ in.
user>\ user_name
API\ key\ or\ password.
@@ -3725,25 +4193,28 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
auth>\ 1
User\ domain\ \-\ optional\ (v3\ auth)
domain>\ Default
-Tenant\ name\ \-\ optional
-tenant>\
+Tenant\ name\ \-\ optional\ for\ v1\ auth,\ required\ otherwise
+tenant>\ tenant_name
Tenant\ domain\ \-\ optional\ (v3\ auth)
tenant_domain>
Region\ name\ \-\ optional
-region>\
+region>
Storage\ URL\ \-\ optional
-storage_url>\
-Remote\ config
+storage_url>
AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version
-auth_version>\
+auth_version>
+Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
user\ =\ user_name
key\ =\ password_or_api_key
auth\ =\ https://auth.api.rackspacecloud.com/v1.0
-tenant\ =\
-region\ =\
-storage_url\ =\
+domain\ =\ Default
+tenant\ =
+tenant_domain\ =
+region\ =
+storage_url\ =
+auth_version\ =
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -3892,39 +4363,43 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 4
Dropbox\ App\ Key\ \-\ leave\ blank\ normally.
-app_key>\
+app_key>
Dropbox\ App\ Secret\ \-\ leave\ blank\ normally.
-app_secret>\
+app_secret>
Remote\ config
Please\ visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter\ the\ code:\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-app_key\ =\
-app_secret\ =\
+app_key\ =
+app_secret\ =
token\ =\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -4034,65 +4509,68 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 5
+Storage>\ 6
Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console.
project_number>\ 12345678
Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login.
-service_account_file>\
+service_account_file>
Access\ Control\ List\ for\ new\ objects.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
-\ 1)\ authenticatedRead
-\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access.
-\ 2)\ bucketOwnerFullControl
-\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access.
-\ 3)\ bucketOwnerRead
-\ *\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank].
-\ 4)\ private
-\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles.
-\ 5)\ projectPrivate
-\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
-\ 6)\ publicRead
+\ 1\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
+\ \ \ \\\ "authenticatedRead"
+\ 2\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access.
+\ \ \ \\\ "bucketOwnerFullControl"
+\ 3\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access.
+\ \ \ \\\ "bucketOwnerRead"
+\ 4\ /\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank].
+\ \ \ \\\ "private"
+\ 5\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles.
+\ \ \ \\\ "projectPrivate"
+\ 6\ /\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
+\ \ \ \\\ "publicRead"
object_acl>\ 4
Access\ Control\ List\ for\ new\ buckets.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
-\ 1)\ authenticatedRead
-\ *\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank].
-\ 2)\ private
-\ *\ Project\ team\ members\ get\ access\ according\ to\ their\ roles.
-\ 3)\ projectPrivate
-\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
-\ 4)\ publicRead
-\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access.
-\ 5)\ publicReadWrite
+\ 1\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
+\ \ \ \\\ "authenticatedRead"
+\ 2\ /\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank].
+\ \ \ \\\ "private"
+\ 3\ /\ Project\ team\ members\ get\ access\ according\ to\ their\ roles.
+\ \ \ \\\ "projectPrivate"
+\ 4\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
+\ \ \ \\\ "publicRead"
+\ 5\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access.
+\ \ \ \\\ "publicReadWrite"
bucket_acl>\ 2
Remote\ config
-Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work
@@ -4106,8 +4584,8 @@ Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
type\ =\ google\ cloud\ storage
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014\-07\-17T20:49:14.929208288+01:00","Extra":null}
project_number\ =\ 12345678
object_acl\ =\ private
@@ -4225,40 +4703,50 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 1
Amazon\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Amazon\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -4316,6 +4804,8 @@ Any files you delete with rclone will end up in the trash.
Amazon don\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Amazon\[aq]s
apps or via the Amazon Drive website.
+As of November 17, 2016, files are automatically deleted by Amazon from
+the trash after 30 days.
.SS Using with non \f[C]\&.com\f[] Amazon accounts
.PP
Let\[aq]s say you usually use \f[C]amazon.co.uk\f[].
@@ -4382,14 +4872,14 @@ To avoid this problem, use \f[C]\-\-max\-size\ 50000M\f[] option to
limit the maximum size of uploaded files.
Note that \f[C]\-\-max\-size\f[] does not split files into segments, it
only ignores files over this size.
-.SS Microsoft One Drive
+.SS Microsoft OneDrive
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
Paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[].
.PP
-The initial setup for One Drive involves getting a token from Microsoft
+The initial setup for OneDrive involves getting a token from Microsoft
which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
@@ -4415,31 +4905,35 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 9
+Storage>\ 10
Microsoft\ App\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Microsoft\ App\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -4453,8 +4947,8 @@ Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"access_token":"XXXXXX"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -4476,7 +4970,7 @@ to unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[] like this,
.PP
-List directories in top level of your One Drive
+List directories in top level of your OneDrive
.IP
.nf
\f[C]
@@ -4484,7 +4978,7 @@ rclone\ lsd\ remote:
\f[]
.fi
.PP
-List all the files in your One Drive
+List all the files in your OneDrive
.IP
.nf
\f[C]
@@ -4492,7 +4986,7 @@ rclone\ ls\ remote:
\f[]
.fi
.PP
-To copy a local directory to an One Drive directory called backup
+To copy a local directory to an OneDrive directory called backup
.IP
.nf
\f[C]
@@ -4501,7 +4995,7 @@ rclone\ copy\ /home/source\ remote:backup
.fi
.SS Modified time and hashes
.PP
-One Drive allows modification times to be set on objects accurate to 1
+OneDrive allows modification times to be set on objects accurate to 1
second.
These will be used to detect whether objects need syncing or not.
.PP
@@ -4512,7 +5006,7 @@ One drive supports SHA1 type hashes, so you can use
Any files you delete with rclone will end up in the trash.
Microsoft doesn\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Microsoft\[aq]s
-apps or via the One Drive website.
+apps or via the OneDrive website.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
@@ -4527,14 +5021,14 @@ Cutoff for switching to chunked upload \- must be <= 100MB.
The default is 10MB.
.SS Limitations
.PP
-Note that One Drive is case insensitive so you can\[aq]t have a file
+Note that OneDrive is case insensitive so you can\[aq]t have a file
called "Hello.doc" and one called "hello.doc".
.PP
-Rclone only supports your default One Drive, and doesn\[aq]t work with
+Rclone only supports your default OneDrive, and doesn\[aq]t work with
One Drive for business.
Both these issues may be fixed at some point depending on user demand!
.PP
-There are quite a few characters that can\[aq]t be in One Drive file
+There are quite a few characters that can\[aq]t be in OneDrive file
names.
These can\[aq]t occur on Windows platforms, but on non\-Windows
platforms they are common.
@@ -4577,31 +5071,35 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 7
+Storage>\ 8
Hubic\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Hubic\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -4615,8 +5113,8 @@ Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"access_token":"XXXXXX"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -4723,25 +5221,29 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 3
Account\ ID
@@ -4749,13 +5251,13 @@ account>\ 123456789abc
Application\ Key
key>\ 0123456789abcdef0123456789abcdef0123456789
Endpoint\ for\ the\ service\ \-\ leave\ blank\ normally.
-endpoint>\
+endpoint>
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
account\ =\ 123456789abc
key\ =\ 0123456789abcdef0123456789abcdef0123456789
-endpoint\ =\
+endpoint\ =
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -5049,31 +5551,35 @@ Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 6\ /\ Google\ Drive
+\ 7\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 7\ /\ Hubic
+\ 8\ /\ Hubic
\ \ \ \\\ "hubic"
-\ 8\ /\ Local\ Disk
+\ 9\ /\ Local\ Disk
\ \ \ \\\ "local"
-\ 9\ /\ Microsoft\ OneDrive
+10\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-11\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-Storage>\ 11
+Storage>\ 13
Yandex\ Client\ Id\ \-\ leave\ blank\ normally.
-client_id>\
+client_id>
Yandex\ Client\ Secret\ \-\ leave\ blank\ normally.
-client_secret>\
+client_secret>
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -5087,8 +5593,8 @@ Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =\
-client_secret\ =\
+client_id\ =
+client_secret\ =
token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016\-12\-29T12:27:11.362788025Z"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
@@ -5150,6 +5656,151 @@ format.
.SS MD5 checksums
.PP
MD5 checksums are natively supported by Yandex Disk.
+.SS SFTP
+.PP
+SFTP is the Secure (or SSH) File Transfer
+Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
+.PP
+It runs over SSH v2 and is standard with most modern SSH installations.
+.PP
+Paths are specified as \f[C]remote:path\f[].
+If the path does not begin with a \f[C]/\f[] it is relative to the home
+directory of the user.
+An empty path \f[C]remote:\f[] refers to the user\[aq]s home directory.
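+.PP
+For example, assuming a configured remote called \f[C]remote:\f[],
+these list the contents of \f[C]documents\f[] relative to the home
+directory, and of the absolute path \f[C]/var/log\f[], respectively
+(the paths shown are illustrative only)
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:documents
+rclone\ ls\ remote:/var/log
+\f[]
+.fi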
+.PP
+Here is an example of making an SFTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process.
+You will need the host name of the SSH server to connect to, and
+optionally a user name, port and password.
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 7\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 8\ /\ Hubic
+\ \ \ \\\ "hubic"
+\ 9\ /\ Local\ Disk
+\ \ \ \\\ "local"
+10\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ 12
+SSH\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ example.com
+\ \ \ \\\ "example.com"
+host>\ example.com
+SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw
+user>
+SSH\ port
+port>
+SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+n)\ No\ leave\ this\ optional\ password\ blank
+y/g/n>\ n
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+host\ =\ example.com
+user\ =
+port\ =
+pass\ =
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all directories in the home directory
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+Make a new directory
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ remote:path/to/directory
+\f[]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:path/to/directory
+\f[]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[] to the remote directory, deleting
+any excess files in the directory.
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:directory
+\f[]
+.fi
+.SS Modified time
+.PP
+Modified times are stored on the server to 1 second precision.
+.PP
+Modified times are used in syncing and are fully supported.
+.SS Limitations
+.PP
+SFTP does not support any checksums.
+.PP
+SFTP isn\[aq]t supported under plan9 until this
+issue (https://github.com/pkg/sftp/issues/156) is fixed.
+.PP
+Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t
+work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
+\f[C]\-\-dump\-auth\f[]
+.PP
+Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
+\f[C]\-\-contimeout\f[] is).
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -5207,12 +5858,14 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "onedrive"
11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-12\ /\ Yandex\ Disk
+12\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 5
Remote\ to\ encrypt/decrypt.
Normally\ should\ contain\ a\ \[aq]:\[aq]\ and\ a\ path,\ eg\ "myremote:path/to/dir",
-"myremote:bucket"\ or\ "myremote:"
+"myremote:bucket"\ or\ maybe\ "myremote:"\ (not\ recommended).
remote>\ remote:path
How\ to\ encrypt\ the\ filenames.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
@@ -5269,8 +5922,11 @@ Note that if you reconfigure rclone with the same passwords/passphrases
elsewhere it will be compatible \- all the secrets used are derived from
those two passwords/passphrases.
.PP
-Note that rclone does not encrypt * file length \- this can be calcuated
-within 16 bytes * modification time \- used for syncing
+Note that rclone does not encrypt
+.IP \[bu] 2
+file length \- this can be calculated within 16 bytes
+.IP \[bu] 2
+modification time \- used for syncing
.SS Specifying the remote
.PP
In normal use, make sure the remote has a \f[C]:\f[] in.
@@ -5368,14 +6024,27 @@ $\ rclone\ \-q\ ls\ remote:path
.PP
Here are some of the features of the file name encryption modes
.PP
-Off * doesn\[aq]t hide file names or directory structure * allows for
-longer file names (~246 characters) * can use sub paths and copy single
-files
+Off
+.IP \[bu] 2
+doesn\[aq]t hide file names or directory structure
+.IP \[bu] 2
+allows for longer file names (~246 characters)
+.IP \[bu] 2
+can use sub paths and copy single files
.PP
-Standard * file names encrypted * file names can\[aq]t be as long (~156
-characters) * can use sub paths and copy single files * directory
-structure visibile * identical files names will have identical uploaded
-names * can use shortcuts to shorten the directory recursion
+Standard
+.IP \[bu] 2
+file names encrypted
+.IP \[bu] 2
+file names can\[aq]t be as long (~156 characters)
+.IP \[bu] 2
+can use sub paths and copy single files
+.IP \[bu] 2
+directory structure visible
+.IP \[bu] 2
+identical file names will have identical uploaded names
+.IP \[bu] 2
+can use shortcuts to shorten the directory recursion
.PP
Cloud storage systems have various limits on file name length and total
path length which you are more likely to hit using "Standard" file name
@@ -5393,6 +6062,58 @@ depends on that.
Hashes are not stored for crypt.
However the data integrity is protected by an extremely strong crypto
authenticator.
+.PP
+Note that you should use the \f[C]rclone\ cryptcheck\f[] command to
+check the integrity of a crypted remote instead of
+\f[C]rclone\ check\f[] which can\[aq]t check the checksums properly.
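+.PP
+For example, to check a local directory against its crypted copy, a
+cryptcheck run might look like this (the remote name \f[C]secret:\f[]
+is illustrative only)
+.IP
+.nf
+\f[C]
+rclone\ cryptcheck\ /path/to/local\ secret:
+\f[]
+.fi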
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-crypt\-show\-mapping
+.PP
+If this flag is set then for each file that the remote is asked to list,
+it will log (at level INFO) a line stating the decrypted file name and
+the encrypted file name.
+.PP
+This is so you can work out which encrypted names correspond to which
+decrypted names, in case you need to do something with the encrypted
+file names, or for debugging purposes.
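+.PP
+For example, a listing showing the mapping might be run like this (the
+remote name \f[C]secret:\f[] is illustrative only; \f[C]\-v\f[] is
+needed to see the INFO level log lines)
+.IP
+.nf
+\f[C]
+rclone\ \-v\ \-\-crypt\-show\-mapping\ ls\ secret:
+\f[]
+.fi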
+.SS Backing up a crypted remote
+.PP
+If you wish to back up a crypted remote, it is recommended that you use
+\f[C]rclone\ sync\f[] on the encrypted files, and make sure the
+passwords are the same in the new encrypted remote.
+.PP
+This will have the following advantages
+.IP \[bu] 2
+\f[C]rclone\ sync\f[] will check the checksums while copying
+.IP \[bu] 2
+you can use \f[C]rclone\ check\f[] between the encrypted remotes
+.IP \[bu] 2
+you don\[aq]t decrypt and encrypt unnecessarily
+.PP
+For example, let\[aq]s say you have your original remote at
+\f[C]remote:\f[] with the encrypted version at \f[C]eremote:\f[] with
+path \f[C]remote:crypt\f[].
+You would then set up the new remote \f[C]remote2:\f[] and then the
+encrypted version \f[C]eremote2:\f[] with path \f[C]remote2:crypt\f[]
+using the same passwords as \f[C]eremote:\f[].
+.PP
+To sync the two remotes you would do
+.IP
+.nf
+\f[C]
+rclone\ sync\ remote:crypt\ remote2:crypt
+\f[]
+.fi
+.PP
+And to check the integrity you would do
+.IP
+.nf
+\f[C]
+rclone\ check\ remote:crypt\ remote2:crypt
+\f[]
+.fi
.SS File formats
.SS File encryption
.PP
@@ -5583,6 +6304,52 @@ exceeds 258 characters on z, so only use this option if you have to.
.SS Specific options
.PP
Here are the command line options specific to local storage
+.SS \-\-copy\-links, \-L
+.PP
+Normally rclone will ignore symlinks or junction points (which behave
+like symlinks under Windows).
+.PP
+If you supply this flag then rclone will follow the symlink and copy the
+pointed to file or directory.
+.PP
+This flag applies to all commands.
+.PP
+For example, supposing you have a directory structure like this
+.IP
+.nf
+\f[C]
+$\ tree\ /tmp/a
+/tmp/a
+├──\ b\ \->\ ../b
+├──\ expected\ \->\ ../expected
+├──\ one
+└──\ two
+\ \ \ \ └──\ three
+\f[]
+.fi
+.PP
+Then you can see the difference with and without the flag like this
+.IP
+.nf
+\f[C]
+$\ rclone\ ls\ /tmp/a
+\ \ \ \ \ \ \ \ 6\ one
+\ \ \ \ \ \ \ \ 6\ two/three
+\f[]
+.fi
+.PP
+and
+.IP
+.nf
+\f[C]
+$\ rclone\ \-L\ ls\ /tmp/a
+\ \ \ \ \ 4174\ expected
+\ \ \ \ \ \ \ \ 6\ one
+\ \ \ \ \ \ \ \ 6\ two/three
+\ \ \ \ \ \ \ \ 6\ b/two
+\ \ \ \ \ \ \ \ 6\ b/one
+\f[]
+.fi
.SS \-\-one\-file\-system, \-x
.PP
This tells rclone to stay in the filesystem specified by the root and
@@ -5633,6 +6400,195 @@ On systems where it isn\[aq]t supported (eg Windows) it will not appear
as an valid flag.
.SS Changelog
.IP \[bu] 2
+v1.36 \- 2017\-03\-18
+.RS 2
+.IP \[bu] 2
+New Features
+.IP \[bu] 2
+SFTP remote (Jack Schmidt)
+.IP \[bu] 2
+Re\-implement sync routine to work a directory at a time reducing memory
+usage
+.IP \[bu] 2
+Logging revamped to be more inline with rsync \- now much quieter
+.RS 2
+.IP \[bu] 2
+\-v only shows transfers
+.IP \[bu] 2
+\-vv is for full debug
+.IP \[bu] 2
+\-\-syslog to log to syslog on capable platforms
+.RE
+.IP \[bu] 2
+Implement \-\-backup\-dir and \-\-suffix
+.IP \[bu] 2
+Implement \-\-track\-renames (initial implementation by Bjørn Erik
+Pedersen)
+.IP \[bu] 2
+Add time\-based bandwidth limits (Lukas Loesche)
+.IP \[bu] 2
+rclone cryptcheck: checks integrity of crypt remotes
+.IP \[bu] 2
+Allow all config file variables and options to be set from environment
+variables
+.IP \[bu] 2
+Add \-\-buffer\-size parameter to control buffer size for copy
+.IP \[bu] 2
+Make \-\-delete\-after the default
+.IP \[bu] 2
+Add \-\-ignore\-checksum flag (fixed by Hisham Zarka)
+.IP \[bu] 2
+rclone check: Add \-\-download flag to check all the data, not just
+hashes
+.IP \[bu] 2
+rclone cat: add \-\-head, \-\-tail, \-\-offset, \-\-count and
+\-\-discard
+.IP \[bu] 2
+rclone config: when choosing from a list, allow the value to be entered
+too
+.IP \[bu] 2
+rclone config: allow rename and copy of remotes
+.IP \[bu] 2
+rclone obscure: for generating encrypted passwords for rclone\[aq]s
+config (T.C.
+Ferguson)
+.IP \[bu] 2
+Comply with XDG Base Directory specification (Dario Giovannetti)
+.RS 2
+.IP \[bu] 2
+this moves the default location of the config file in a backwards
+compatible way
+.RE
+.IP \[bu] 2
+Release changes
+.RS 2
+.IP \[bu] 2
+Ubuntu snap support (Dedsec1)
+.IP \[bu] 2
+Compile with go 1.8
+.IP \[bu] 2
+MIPS/Linux big and little endian support
+.RE
+.IP \[bu] 2
+Bug Fixes
+.IP \[bu] 2
+Fix copyto copying things to the wrong place if the destination dir
+didn\[aq]t exist
+.IP \[bu] 2
+Fix parsing of remotes in moveto and copyto
+.IP \[bu] 2
+Fix \-\-delete\-before deleting files on copy
+.IP \[bu] 2
+Fix \-\-files\-from with an empty file copying everything
+.IP \[bu] 2
+Fix sync: don\[aq]t update mod times if \-\-dry\-run set
+.IP \[bu] 2
+Fix MimeType propagation
+.IP \[bu] 2
+Fix filters to add ** rules to directory rules
+.IP \[bu] 2
+Local
+.IP \[bu] 2
+Implement \-L, \-\-copy\-links flag to allow rclone to follow symlinks
+.IP \[bu] 2
+Open files in write only mode so rclone can write to an rclone mount
+.IP \[bu] 2
+Fix unnormalised unicode causing problems reading directories
+.IP \[bu] 2
+Fix interaction between \-x flag and \-\-max\-depth
+.IP \[bu] 2
+Mount
+.IP \[bu] 2
+Implement proper directory handling (mkdir, rmdir, renaming)
+.IP \[bu] 2
+Make include and exclude filters apply to mount
+.IP \[bu] 2
+Implement read and write async buffers \- control with \-\-buffer\-size
+.IP \[bu] 2
+Fix fsync for directories
+.IP \[bu] 2
+Fix retry on network failure when reading off crypt
+.IP \[bu] 2
+Crypt
+.IP \[bu] 2
+Add \-\-crypt\-show\-mapping to show encrypted file mapping
+.IP \[bu] 2
+Fix crypt writer getting stuck in a loop
+.RS 2
+.IP \[bu] 2
+\f[B]IMPORTANT\f[] this bug had the potential to cause data corruption
+when
+.RS 2
+.IP \[bu] 2
+reading data from a network based remote and
+.IP \[bu] 2
+writing to a crypt on Google Drive
+.RE
+.IP \[bu] 2
+Use the cryptcheck command to validate your data if you are concerned
+.IP \[bu] 2
+If syncing two crypt remotes, sync the unencrypted remote
+.RE
+.IP \[bu] 2
+Amazon Drive
+.IP \[bu] 2
+Fix panics on Move (rename)
+.IP \[bu] 2
+Fix panic on token expiry
+.IP \[bu] 2
+B2
+.IP \[bu] 2
+Fix inconsistent listings and rclone check
+.IP \[bu] 2
+Fix uploading empty files with go1.8
+.IP \[bu] 2
+Constrain memory usage when doing multipart uploads
+.IP \[bu] 2
+Fix upload url not being refreshed properly
+.IP \[bu] 2
+Drive
+.IP \[bu] 2
+Fix Rmdir on directories with trashed files
+.IP \[bu] 2
+Fix "Ignoring unknown object" when downloading
+.IP \[bu] 2
+Add \-\-drive\-list\-chunk
+.IP \[bu] 2
+Add \-\-drive\-skip\-gdocs (Károly Oláh)
+.IP \[bu] 2
+OneDrive
+.IP \[bu] 2
+Implement Move
+.IP \[bu] 2
+Fix Copy
+.RS 2
+.IP \[bu] 2
+Fix overwrite detection in Copy
+.IP \[bu] 2
+Fix waitForJob to parse errors correctly
+.RE
+.IP \[bu] 2
+Use token renewer to stop auth errors on long uploads
+.IP \[bu] 2
+Fix uploading empty files with go1.8
+.IP \[bu] 2
+Google Cloud Storage
+.IP \[bu] 2
+Fix depth 1 directory listings
+.IP \[bu] 2
+Yandex
+.IP \[bu] 2
+Fix single level directory listing
+.IP \[bu] 2
+Dropbox
+.IP \[bu] 2
+Normalise the case for single level directory listings
+.IP \[bu] 2
+Fix depth 1 listing
+.IP \[bu] 2
+S3
+.IP \[bu] 2
+Added ca\-central\-1 region (Jon Yergatian)
+.RE
+.IP \[bu] 2
v1.35 \- 2017\-01\-02
.RS 2
.IP \[bu] 2
@@ -7166,6 +8122,29 @@ Alishan Ladhani
Thibault Molleman
.IP \[bu] 2
Scott McGillivray
+.IP \[bu] 2
+Bjørn Erik Pedersen
+.IP \[bu] 2
+Lukas Loesche
+.IP \[bu] 2
+emyarod
+.IP \[bu] 2
+T.C. Ferguson
+.IP \[bu] 2
+Brandur
+.IP \[bu] 2
+Dario Giovannetti
+.IP \[bu] 2
+Károly Oláh
+.IP \[bu] 2
+Jon Yergatian
+.IP \[bu] 2
+Jack Schmidt
+.IP \[bu] 2
+Dedsec1
+.IP \[bu] 2
+Hisham Zarka
.SH Contact the rclone project
.SS Forum
.PP
diff --git a/snapcraft.yaml b/snapcraft.yaml
index e11856ae0..959d608d3 100644
--- a/snapcraft.yaml
+++ b/snapcraft.yaml
@@ -1,5 +1,5 @@
name: rclone
-version: 1.35
+version: 1.36
summary: rsync for cloud storage
description:
Rclone is a command line program to sync files to and from cloud storage providers such as
@@ -16,7 +16,7 @@ parts:
rclone:
plugin: go
source: https://github.com/ncw/rclone
- source-tag: v1.35
+ source-tag: v1.36
source-type: git
go-importpath: github.com/ncw/rclone
build-packages: [gcc, libgudev-1.0-dev, fuse]