.\"t
.\" Automatically generated by Pandoc 1.16.0.2
.\"
.TH "rclone" "1" "Jun 18, 2016" "User Manual" ""
.hy
.SH Rclone
.PP
[IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/)
.PP
Rclone is a command line program to sync files and directories to and
from
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Amazon S3
.IP \[bu] 2
Openstack Swift / Rackspace cloud files / Memset Memstore
.IP \[bu] 2
Dropbox
.IP \[bu] 2
Google Cloud Storage
.IP \[bu] 2
Amazon Drive
.IP \[bu] 2
Microsoft One Drive
.IP \[bu] 2
Hubic
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Yandex Disk
.IP \[bu] 2
The local filesystem
.PP
Features
.IP \[bu] 2
MD5/SHA1 hashes checked at all times for file integrity
.IP \[bu] 2
Timestamps preserved on files
.IP \[bu] 2
Partial syncs supported on a whole file basis
.IP \[bu] 2
Copy mode to just copy new/changed files
.IP \[bu] 2
Sync (one way) mode to make a directory identical
.IP \[bu] 2
Check mode to check for file hash equality
.IP \[bu] 2
Can sync to and from network, eg two different cloud accounts
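.PP
For example, syncing a folder between two cloud accounts could look like
this, assuming you have configured remotes called \f[C]drive\f[] and
\f[C]s3\f[] (the remote names and paths here are illustrative only)
.IP
.nf
\f[C]
rclone\ sync\ drive:Documents\ s3:mybucket/Documents
\f[]
.fi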
.PP
Links
.IP \[bu] 2
Home page (http://rclone.org/)
.IP \[bu] 2
Github project page for source and bug
tracker (http://github.com/ncw/rclone)
.IP \[bu] 2
Google+ page
.IP \[bu] 2
Downloads (http://rclone.org/downloads/)
.SS Install
.PP
Rclone is a Go program and comes as a single binary file.
.PP
Download (http://rclone.org/downloads/) the relevant binary.
.PP
Alternatively, if you have Go 1.5+ installed, use
.IP
.nf
\f[C]
go\ get\ github.com/ncw/rclone
\f[]
.fi
.PP
and this will build the binary in \f[C]$GOPATH/bin\f[].
If you have built rclone before then you will want to update its
dependencies first with this
.IP
.nf
\f[C]
go\ get\ \-u\ \-v\ github.com/ncw/rclone/...
\f[]
.fi
.PP
See the Usage section (http://rclone.org/docs/) of the docs for how to
use rclone, or run \f[C]rclone\ \-h\f[].
.SS linux binary downloaded files install example
.IP
.nf
\f[C]
unzip\ rclone\-v1.17\-linux\-amd64.zip
cd\ rclone\-v1.17\-linux\-amd64
#copy\ binary\ file
sudo\ cp\ rclone\ /usr/sbin/
sudo\ chown\ root:root\ /usr/sbin/rclone
sudo\ chmod\ 755\ /usr/sbin/rclone
#install\ manpage
sudo\ mkdir\ \-p\ /usr/local/share/man/man1
sudo\ cp\ rclone.1\ /usr/local/share/man/man1/
sudo\ mandb
\f[]
.fi
.SS Configure
.PP
First you\[aq]ll need to configure rclone.
As the object storage systems have quite complicated authentication,
the details are kept in a config file \f[C]\&.rclone.conf\f[] in your
home directory by default.
(You can use the \f[C]\-\-config\f[] option to choose a different config
file.)
.PP
The easiest way to make the config is to run rclone with the config
option:
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
See the following for detailed instructions for
.IP \[bu] 2
Google drive (http://rclone.org/drive/)
.IP \[bu] 2
Amazon S3 (http://rclone.org/s3/)
.IP \[bu] 2
Swift / Rackspace Cloudfiles / Memset
Memstore (http://rclone.org/swift/)
.IP \[bu] 2
Dropbox (http://rclone.org/dropbox/)
.IP \[bu] 2
Google Cloud Storage (http://rclone.org/googlecloudstorage/)
.IP \[bu] 2
Local filesystem (http://rclone.org/local/)
.IP \[bu] 2
Amazon Drive (http://rclone.org/amazonclouddrive/)
.IP \[bu] 2
Backblaze B2 (http://rclone.org/b2/)
.IP \[bu] 2
Hubic (http://rclone.org/hubic/)
.IP \[bu] 2
Microsoft One Drive (http://rclone.org/onedrive/)
.IP \[bu] 2
Yandex Disk (http://rclone.org/yandex/)
.SS Usage
.PP
Rclone syncs a directory tree from one storage system to another.
.PP
Its syntax is like this
.IP
.nf
\f[C]
Syntax:\ [options]\ subcommand\ <parameters>\ <parameters...>
\f[]
.fi
.PP
Source and destination paths are specified by the name you gave the
storage system in the config file then the sub path, eg "drive:myfolder"
to look at "myfolder" in Google drive.
.PP
You can define as many storage paths as you like in the config file.
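.PP
For example, with a remote named \f[C]drive\f[] in your config file (the
remote name and path here are illustrative), you could list the contents
of a sub path with
.IP
.nf
\f[C]
rclone\ ls\ drive:myfolder
\f[]
.fi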
.SS Subcommands
.SS rclone copy source:path dest:path
.PP
Copy the source to the destination.
Doesn\[aq]t transfer unchanged files, testing by size and modification
time or MD5SUM.
Doesn\[aq]t delete files from the destination.
.PP
Note that it is always the contents of the directory that is synced, not
the directory itself, so when source:path is a directory, it\[aq]s the
contents of source:path that are copied, not the directory name and
contents.
.PP
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
.PP
For example
.IP
.nf
\f[C]
rclone\ copy\ source:sourcepath\ dest:destpath
\f[]
.fi
.PP
Let\[aq]s say there are two files in sourcepath
.IP
.nf
\f[C]
sourcepath/one.txt
sourcepath/two.txt
\f[]
.fi
.PP
This copies them to
.IP
.nf
\f[C]
destpath/one.txt
destpath/two.txt
\f[]
.fi
.PP
Not to
.IP
.nf
\f[C]
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
\f[]
.fi
.PP
If you are familiar with \f[C]rsync\f[], rclone always works as if you
had written a trailing / \- meaning "copy the contents of this
directory".
This applies to all commands and whether you are talking about the
source or destination.
.SS rclone sync source:path dest:path
.PP
Sync the source to the destination, changing the destination only.
Doesn\[aq]t transfer unchanged files, testing by size and modification
time or MD5SUM.
Destination is updated to match source, including deleting files if
necessary.
.PP
\f[B]Important\f[]: Since this can cause data loss, test first with the
\f[C]\-\-dry\-run\f[] flag to see exactly what would be copied and
deleted.
.PP
Note that files in the destination won\[aq]t be deleted if there were
any errors at any point.
.PP
It is always the contents of the directory that is synced, not the
directory itself, so when source:path is a directory, it\[aq]s the
contents of source:path that are copied, not the directory name and
contents.
See the extended explanation in the \f[C]copy\f[] command above if
unsure.
.PP
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
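.PP
For example, to make a remote backup identical to a local directory (the
paths here are illustrative), check with \f[C]\-\-dry\-run\f[] first and
then sync
.IP
.nf
\f[C]
rclone\ sync\ \-\-dry\-run\ /path/to/files\ remote:backup
rclone\ sync\ /path/to/files\ remote:backup
\f[]
.fi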
.SS rclone move source:path dest:path
.PP
Moves the source to the destination.
.PP
If there are no filters in use this is equivalent to a copy followed by
a purge, but may use server side operations to speed it up if possible.
.PP
If filters are in use then it is equivalent to a copy followed by
delete, followed by an rmdir (which only removes the directory if
empty).
The individual files will be moved with server side operations if
possible.
.PP
\f[B]Important\f[]: Since this can cause data loss, test first with the
\f[C]\-\-dry\-run\f[] flag.
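.PP
For example (the paths are illustrative), after checking with
\f[C]\-\-dry\-run\f[]
.IP
.nf
\f[C]
rclone\ move\ source:sourcepath\ dest:destpath
\f[]
.fi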
.SS rclone ls remote:path
.PP
List all the objects in the path with size and path.
.SS rclone lsd remote:path
.PP
List all directories/containers/buckets in the path.
.SS rclone lsl remote:path
.PP
List all the objects in the path with modification time, size and
path.
.SS rclone md5sum remote:path
.PP
Produces an md5sum file for all the objects in the path.
This is in the same format as the standard md5sum tool produces.
.SS rclone sha1sum remote:path
.PP
Produces an sha1sum file for all the objects in the path.
This is in the same format as the standard sha1sum tool produces.
.SS rclone size remote:path
.PP
Prints the total size of objects in remote:path and the number of
objects.
.SS rclone mkdir remote:path
.PP
Make the path if it doesn\[aq]t already exist.
.SS rclone rmdir remote:path
.PP
Remove the path.
Note that you can\[aq]t remove a path with objects in it, use purge for
that.
.SS rclone purge remote:path
.PP
Remove the path and all of its contents.
Note that this does not obey include/exclude filters \- everything will
be removed.
Use \f[C]delete\f[] if you want to selectively delete files.
.SS rclone delete remote:path
.PP
Remove the contents of path.
Unlike \f[C]purge\f[] it obeys include/exclude filters so can be used to
selectively delete files.
.PP
Eg delete all files bigger than 100MBytes.
.PP
Check what would be deleted first (use either)
.IP
.nf
\f[C]
rclone\ \-\-min\-size\ 100M\ lsl\ remote:path
rclone\ \-\-dry\-run\ \-\-min\-size\ 100M\ delete\ remote:path
\f[]
.fi
.PP
Then delete
.IP
.nf
\f[C]
rclone\ \-\-min\-size\ 100M\ delete\ remote:path
\f[]
.fi
.PP
That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
.SS rclone check source:path dest:path
.PP
Checks the files in the source and destination match.
It compares sizes and MD5SUMs and prints a report of files which
don\[aq]t match.
It doesn\[aq]t alter the source or destination.
.PP
\f[C]\-\-size\-only\f[] may be used to only compare the sizes, not the
MD5SUMs.
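.PP
For example, to verify a local directory against a backup (the paths
here are illustrative)
.IP
.nf
\f[C]
rclone\ check\ /path/to/files\ remote:backup
\f[]
.fi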
.SS rclone dedupe remote:path
.PP
By default \f[C]dedupe\f[] interactively finds duplicate files and
offers to delete all but one or rename them to be different.
Only useful with Google Drive which can have duplicate file names.
.PP
The \f[C]dedupe\f[] command will delete all but one of any identical
(same md5sum) files it finds without confirmation.
This means that for most duplicated files the \f[C]dedupe\f[] command
will not be interactive.
You can use \f[C]\-\-dry\-run\f[] to see what would happen without doing
anything.
.PP
Here is an example run.
.PP
Before \- with duplicates
.IP
.nf
\f[C]
$\ rclone\ lsl\ drive:dupes
\ \ 6048320\ 2016\-03\-05\ 16:23:16.798000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:23:11.775000000\ one.txt
\ \ \ 564374\ 2016\-03\-05\ 16:23:06.731000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:18:26.092000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:22:46.185000000\ two.txt
\ \ 1744073\ 2016\-03\-05\ 16:22:38.104000000\ two.txt
\ \ \ 564374\ 2016\-03\-05\ 16:22:52.118000000\ two.txt
\f[]
.fi
.PP
Now the \f[C]dedupe\f[] session
.IP
.nf
\f[C]
$\ rclone\ dedupe\ drive:dupes
2016/03/05\ 16:24:37\ Google\ drive\ root\ \[aq]dupes\[aq]:\ Looking\ for\ duplicates\ using\ interactive\ mode.
one.txt:\ Found\ 4\ duplicates\ \-\ deleting\ identical\ copies
one.txt:\ Deleting\ 2/3\ identical\ duplicates\ (md5sum\ "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt:\ 2\ duplicates\ remain
\ \ 1:\ \ \ \ \ \ 6048320\ bytes,\ 2016\-03\-05\ 16:23:16.798000000,\ md5sum\ 1eedaa9fe86fd4b8632e2ac549403b36
\ \ 2:\ \ \ \ \ \ \ 564374\ bytes,\ 2016\-03\-05\ 16:23:06.731000000,\ md5sum\ 7594e7dc9fc28f727c42ee3e0749de81
s)\ Skip\ and\ do\ nothing
k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step)
r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg)
s/k/r>\ k
Enter\ the\ number\ of\ the\ file\ to\ keep>\ 1
one.txt:\ Deleted\ 1\ extra\ copies
two.txt:\ Found\ 3\ duplicates\ \-\ deleting\ identical\ copies
two.txt:\ 3\ duplicates\ remain
\ \ 1:\ \ \ \ \ \ \ 564374\ bytes,\ 2016\-03\-05\ 16:22:52.118000000,\ md5sum\ 7594e7dc9fc28f727c42ee3e0749de81
\ \ 2:\ \ \ \ \ \ 6048320\ bytes,\ 2016\-03\-05\ 16:22:46.185000000,\ md5sum\ 1eedaa9fe86fd4b8632e2ac549403b36
\ \ 3:\ \ \ \ \ \ 1744073\ bytes,\ 2016\-03\-05\ 16:22:38.104000000,\ md5sum\ 851957f7fb6f0bc4ce76be966d336802
s)\ Skip\ and\ do\ nothing
k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step)
r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg)
s/k/r>\ r
two\-1.txt:\ renamed\ from:\ two.txt
two\-2.txt:\ renamed\ from:\ two.txt
two\-3.txt:\ renamed\ from:\ two.txt
\f[]
.fi
.PP
The result being
.IP
.nf
\f[C]
$\ rclone\ lsl\ drive:dupes
\ \ 6048320\ 2016\-03\-05\ 16:23:16.798000000\ one.txt
\ \ \ 564374\ 2016\-03\-05\ 16:22:52.118000000\ two\-1.txt
\ \ 6048320\ 2016\-03\-05\ 16:22:46.185000000\ two\-2.txt
\ \ 1744073\ 2016\-03\-05\ 16:22:38.104000000\ two\-3.txt
\f[]
.fi
.PP
Dedupe can be run non interactively using the \f[C]\-\-dedupe\-mode\f[]
flag.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ interactive\f[] \- interactive as above.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ skip\f[] \- removes identical files then skips
anything left.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ first\f[] \- removes identical files then keeps
the first one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ newest\f[] \- removes identical files then keeps
the newest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ oldest\f[] \- removes identical files then keeps
the oldest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ rename\f[] \- removes identical files then
renames the rest to be different.
.PP
For example to rename all the identically named photos in your Google
Photos directory, do
.IP
.nf
\f[C]
rclone\ dedupe\ \-\-dedupe\-mode\ rename\ "drive:Google\ Photos"
\f[]
.fi
.SS rclone config
.PP
Enter an interactive configuration session.
.SS rclone help
.PP
Prints help on rclone commands and options.
.SS Quoting and the shell
.PP
When you are typing commands to your computer you are using something
called the command line shell.
This interprets various characters in an OS specific way.
.PP
Here are some gotchas which may help users unfamiliar with the shell
rules.
.SS Linux / OSX
.PP
If your names have spaces or shell metacharacters (eg \f[C]*\f[],
\f[C]?\f[], \f[C]$\f[], \f[C]\[aq]\f[], \f[C]"\f[] etc) then you must
quote them.
Use single quotes \f[C]\[aq]\f[] by default.
.IP
.nf
\f[C]
rclone\ copy\ \[aq]Important\ files?\[aq]\ remote:backup
\f[]
.fi
.PP
If you want to send a \f[C]\[aq]\f[] you will need to use \f[C]"\f[], eg
.IP
.nf
\f[C]
rclone\ copy\ "O\[aq]Reilly\ Reviews"\ remote:backup
\f[]
.fi
.PP
The rules for quoting metacharacters are complicated and if you want the
full details you\[aq]ll have to consult the manual page for your shell.
.SS Windows
.PP
If your names have spaces in them you need to put them in \f[C]"\f[], eg
.IP
.nf
\f[C]
rclone\ copy\ "E:\\folder\ name\\folder\ name\\folder\ name"\ remote:backup
\f[]
.fi
.PP
If you are using the root directory on its own then don\[aq]t quote it
(see #464 (https://github.com/ncw/rclone/issues/464) for why), eg
.IP
.nf
\f[C]
rclone\ copy\ E:\\\ remote:backup
\f[]
.fi
.SS Server Side Copy
.PP
Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
copy.
.PP
This means if you want to copy one folder to another then rclone
won\[aq]t download all the files and re\-upload them; it will instruct
the server to copy them in place.
.PP
Eg
.IP
.nf
\f[C]
rclone\ copy\ s3:oldbucket\ s3:newbucket
\f[]
.fi
.PP
Will copy the contents of \f[C]oldbucket\f[] to \f[C]newbucket\f[]
without downloading and re\-uploading.
.PP
Remotes which don\[aq]t support server side copy (eg local)
\f[B]will\f[] download and re\-upload in this case.
.PP
Server side copies are used with \f[C]sync\f[] and \f[C]copy\f[] and
will be identified in the log when using the \f[C]\-v\f[] flag.
.PP
Server side copies will only be attempted if the remote names are the
same.
.PP
This can be used when scripting to make aged backups efficiently, eg
.IP
.nf
\f[C]
rclone\ sync\ remote:current\-backup\ remote:previous\-backup
rclone\ sync\ /path/to/files\ remote:current\-backup
\f[]
.fi
.SS Options
.PP
Rclone has a number of options to control its behaviour.
.PP
Options which use TIME use the go time parser.
A duration string is a possibly signed sequence of decimal numbers, each
with optional fraction and a unit suffix, such as "300ms", "\-1.5h" or
"2h45m".
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
.PP
Options which use SIZE use kByte by default.
However a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes,
\f[C]M\f[] for MBytes and \f[C]G\f[] for GBytes may be used.
These are the binary units, eg 1, 2**10, 2**20, 2**30 bytes
respectively.
.SS \-\-bwlimit=SIZE
.PP
Bandwidth limit in kBytes/s, or use suffix b|k|M|G.
The default is \f[C]0\f[] which means to not limit bandwidth.
.PP
For example to limit bandwidth usage to 10 MBytes/s use
\f[C]\-\-bwlimit\ 10M\f[]
.PP
This only limits the bandwidth of the data transfer, it doesn\[aq]t
limit the bandwidth of the directory listings etc.
.SS \-\-checkers=N
.PP
The number of checkers to run in parallel.
Checkers do the equality checking of files during a sync.
For some storage systems (eg s3, swift, dropbox) this can take a
significant amount of time so they are run in parallel.
.PP
The default is to run 8 checkers in parallel.
.SS \-c, \-\-checksum
.PP
Normally rclone will look at modification time and size of files to see
if they are equal.
If you set this flag then rclone will check the file hash and size to
determine if files are equal.
.PP
This is useful when the remote doesn\[aq]t support setting modified time
and a more accurate sync is desired than just checking the file size.
.PP
This is very useful when transferring between remotes which store the
same hash type on the object, eg Drive and Swift.
For details of which remotes support which hash type see the table in
the overview section (http://rclone.org/overview/).
.PP
Eg \f[C]rclone\ \-\-checksum\ sync\ s3:/bucket\ swift:/bucket\f[] would
run much quicker than without the \f[C]\-\-checksum\f[] flag.
.PP
When using this flag, rclone won\[aq]t update mtimes of remote files if
they are incorrect as it would normally.
.SS \-\-config=CONFIG_FILE
.PP
Specify the location of the rclone config file.
Normally this is in your home directory as a file called
\f[C]\&.rclone.conf\f[].
If you run \f[C]rclone\ \-h\f[] and look at the help for the
\f[C]\-\-config\f[] option you will see where the default location is
for you.
Use this flag to override the config location, eg
\f[C]rclone\ \-\-config=".myconfig"\ .config\f[].
.SS \-\-contimeout=TIME
.PP
Set the connection timeout.
This should be in go time format which looks like \f[C]5s\f[] for 5
seconds, \f[C]10m\f[] for 10 minutes, or \f[C]3h30m\f[].
.PP
The connection timeout is the amount of time rclone will wait for a
connection to go through to a remote object storage system.
It is \f[C]1m\f[] by default.
.SS \-\-dedupe\-mode MODE
.PP
Mode to run dedupe command in.
One of \f[C]interactive\f[], \f[C]skip\f[], \f[C]first\f[],
\f[C]newest\f[], \f[C]oldest\f[], \f[C]rename\f[].
The default is \f[C]interactive\f[].
See the dedupe command for more information as to what these options
mean.
.SS \-n, \-\-dry\-run
.PP
Do a trial run with no permanent changes.
Use this to see what rclone would do without actually doing it.
Useful when setting up the \f[C]sync\f[] command which deletes files in
the destination.
.SS \-\-ignore\-existing
.PP
Using this option will make rclone unconditionally skip all files that
exist on the destination, no matter the content of these files.
.PP
While this isn\[aq]t a generally recommended option, it can be useful in
cases where your files change due to encryption.
However, it cannot correct partial transfers in case a transfer was
interrupted.
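.PP
For example, to top up a backup without re\-examining files already
present on the destination (the paths here are illustrative)
.IP
.nf
\f[C]
rclone\ copy\ \-\-ignore\-existing\ /path/to/files\ remote:backup
\f[]
.fi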
.SS \-\-ignore\-size
.PP
Normally rclone will look at modification time and size of files to see
if they are equal.
If you set this flag then rclone will check only the modification time.
If \f[C]\-\-checksum\f[] is set then it only checks the checksum.
.PP
It will also cause rclone to skip verifying the sizes are the same after
transfer.
.PP
This can be useful for transferring files to and from onedrive which
occasionally misreports the size of image files (see
#399 (https://github.com/ncw/rclone/issues/399) for more info).
.SS \-I, \-\-ignore\-times
.PP
Using this option will cause rclone to unconditionally upload all files
regardless of the state of files on the destination.
.PP
Normally rclone would skip any files that have the same modification
time and are the same size (or have the same checksum if using
\f[C]\-\-checksum\f[]).
.SS \-\-log\-file=FILE
.PP
Log all of rclone\[aq]s output to FILE.
This is not active by default.
This can be useful for tracking down problems with syncs in combination
with the \f[C]\-v\f[] flag.
See the Logging section for more info.
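.PP
For example, to keep a verbose record of a sync (the paths here are
illustrative)
.IP
.nf
\f[C]
rclone\ sync\ \-v\ \-\-log\-file=rclone.log\ /path/to/files\ remote:backup
\f[]
.fi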
.SS \-\-low\-level\-retries NUMBER
.PP
This controls the number of low level retries rclone does.
.PP
A low level retry is used to retry a failing operation \- typically one
HTTP request.
This might be uploading a chunk of a big file for example.
You will see low level retries in the log with the \f[C]\-v\f[] flag.
.PP
This shouldn\[aq]t need to be changed from the default in normal
operations, however if you get a lot of low level retries you may wish
to reduce the value so rclone moves on to a high level retry (see the
\f[C]\-\-retries\f[] flag) quicker.
.PP
Disable low level retries with \f[C]\-\-low\-level\-retries\ 1\f[].
.SS \-\-max\-depth=N
.PP
This modifies the recursion depth for all the commands except purge.
.PP
So if you do \f[C]rclone\ \-\-max\-depth\ 1\ ls\ remote:path\f[] you
will see only the files in the top level directory.
Using \f[C]\-\-max\-depth\ 2\f[] means you will see all the files in the
first two directory levels and so on.
.PP
For historical reasons the \f[C]lsd\f[] command defaults to using a
\f[C]\-\-max\-depth\f[] of 1 \- you can override this with the command
line flag.
.PP
You can use this flag to disable recursion (with
\f[C]\-\-max\-depth\ 1\f[]).
.PP
Note that if you use this with \f[C]sync\f[] and
\f[C]\-\-delete\-excluded\f[] the files not recursed through are
considered excluded and will be deleted on the destination.
Test first with \f[C]\-\-dry\-run\f[] if you are not sure what will
happen.
.SS \-\-modify\-window=TIME
.PP
When checking whether a file has been modified, this is the maximum
allowed time difference that a file can have and still be considered
equivalent.
.PP
The default is \f[C]1ns\f[] unless this is overridden by a remote.
For example OS X only stores modification times to the nearest second so
if you are reading and writing to an OS X filing system this will be
\f[C]1s\f[] by default.
.PP
This command line flag allows you to override that computed default.
.SS \-\-no\-gzip\-encoding
.PP
Don\[aq]t set \f[C]Accept\-Encoding:\ gzip\f[].
This means that rclone won\[aq]t ask the server for compressed files
automatically.
Useful if you\[aq]ve set the server to return files with
\f[C]Content\-Encoding:\ gzip\f[] but you uploaded compressed files.
.PP
There is no need to set this in normal operation, and doing so will
decrease the network transfer efficiency of rclone.
.SS \-q, \-\-quiet
.PP
Normally rclone outputs stats and a completion message.
If you set this flag it will make as little output as possible.
.SS \-\-retries int
.PP
Retry the entire sync if it fails this many times (default 3).
.PP
Some remotes can be unreliable and a few retries helps pick up the files
which didn\[aq]t get transferred because of errors.
.PP
Disable retries with \f[C]\-\-retries\ 1\f[].
.SS \-\-size\-only
.PP
Normally rclone will look at modification time and size of files to see
if they are equal.
If you set this flag then rclone will check only the size.
.PP
This can be useful when transferring files from dropbox which have been
modified by the desktop sync client, which doesn\[aq]t set checksums or
modification times in the same way as rclone.
.SS \-\-stats=TIME
.PP
Rclone will print stats at regular intervals to show its progress.
.PP
This sets the interval.
.PP
The default is \f[C]1m\f[].
Use 0 to disable.
.SS \-\-delete\-(before,during,after)
.PP
This option allows you to specify when files on your destination are
deleted when you sync folders.
.PP
Specifying the value \f[C]\-\-delete\-before\f[] will delete all files
present on the destination, but not on the source, \f[I]before\f[]
starting the transfer of any new or updated files.
.PP
Specifying \f[C]\-\-delete\-during\f[] (default value) will delete files
while checking and uploading files.
This is usually the fastest option.
.PP
Specifying \f[C]\-\-delete\-after\f[] will delay deletion of files until
all new/updated files have been successfully transferred.
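.PP
For example, to only delete files on the destination after everything
has transferred successfully (the paths here are illustrative)
.IP
.nf
\f[C]
rclone\ sync\ \-\-delete\-after\ /path/to/files\ remote:backup
\f[]
.fi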
.SS \-\-timeout=TIME
.PP
This sets the IO idle timeout.
If a transfer has started but then becomes idle for this long it is
considered broken and disconnected.
.PP
The default is \f[C]5m\f[].
Set to 0 to disable.
.SS \-\-transfers=N
.PP
The number of file transfers to run in parallel.
It can sometimes be useful to set this to a smaller number if the remote
is giving a lot of timeouts or bigger if you have lots of bandwidth and
a fast remote.
.PP
The default is to run 4 file transfers in parallel.
.SS \-u, \-\-update
.PP
This forces rclone to skip any files which exist on the destination and
have a modified time that is newer than the source file.
.PP
If an existing destination file has a modification time equal (within
the computed modify window precision) to the source file\[aq]s, it will
be updated if the sizes are different.
.PP
On remotes which don\[aq]t support mod time directly the time checked
will be the uploaded time.
This means that if uploading to one of these remotes, rclone will skip
any files which exist on the destination and have an uploaded time that
is newer than the modification time of the source file.
.PP
This can be useful when transferring to a remote which doesn\[aq]t
support mod times directly as it is more accurate than a
\f[C]\-\-size\-only\f[] check and faster than using
\f[C]\-\-checksum\f[].
.SS \-v, \-\-verbose
.PP
If you set this flag, rclone will become very verbose telling you about
every file it considers and transfers.
.PP
Very useful for debugging.
.SS \-V, \-\-version
.PP
Prints the version number.
.SS Configuration Encryption
.PP
Your configuration file contains information for logging in to your
cloud services.
This means that you should keep your \f[C]\&.rclone.conf\f[] file in a
secure location.
.PP
If you are in an environment where that isn\[aq]t possible, you can add
a password to your configuration.
This means that you will have to enter the password every time you start
rclone.
.PP
To add a password to your rclone configuration, execute
\f[C]rclone\ config\f[].
.IP
.nf
\f[C]
>rclone\ config
Current\ remotes:

e)\ Edit\ existing\ remote
n)\ New\ remote
d)\ Delete\ remote
s)\ Set\ configuration\ password
q)\ Quit\ config
e/n/d/s/q>
\f[]
.fi
.PP
Go into \f[C]s\f[], Set configuration password:
.IP
.nf
\f[C]
e/n/d/s/q>\ s
Your\ configuration\ is\ not\ encrypted.
If\ you\ add\ a\ password,\ you\ will\ protect\ your\ login\ information\ to\ cloud\ services.
a)\ Add\ Password
q)\ Quit\ to\ main\ menu
a/q>\ a
Enter\ NEW\ configuration\ password:
password:
Confirm\ NEW\ password:
password:
Password\ set
Your\ configuration\ is\ encrypted.
c)\ Change\ Password
u)\ Unencrypt\ configuration
q)\ Quit\ to\ main\ menu
c/u/q>
\f[]
.fi
.PP
Your configuration is now encrypted, and every time you start rclone you
will now be asked for the password.
In the same menu you can change the password or completely remove
encryption from your configuration.
.PP
There is no way to recover the configuration if you lose your password.
.PP
rclone uses nacl
secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) which
in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your
configuration with secret\-key cryptography.
The password is SHA\-256 hashed, which produces the key for secretbox.
The hashed password is not stored.
.PP
While this provides very good security, we do not recommend storing your
encrypted rclone configuration in public if it contains sensitive
information, unless you use a very strong password.
.PP
If it is safe in your environment, you can set the
\f[C]RCLONE_CONFIG_PASS\f[] environment variable to contain your
password, in which case it will be used for decrypting the
configuration.
.PP
If you are running rclone inside a script, you might want to disable
password prompts.
To do that, pass the parameter \f[C]\-\-ask\-password=false\f[] to
rclone.
This will make rclone fail instead of asking for a password if
\f[C]RCLONE_CONFIG_PASS\f[] doesn\[aq]t contain a valid password.
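A minimal sketch of combining the two in a script; the password value and the remote name are placeholders, and in practice the password would come from a secret store rather than being hard-coded:

```shell
# Sketch only: "example-password" and "remote:" are placeholders.
# In a real script, read the password from a secret store instead.
export RCLONE_CONFIG_PASS="example-password"
# rclone --ask-password=false lsd remote:
# (commented out so this sketch runs without rclone installed;
#  with a wrong password the command above fails instead of prompting)
echo "RCLONE_CONFIG_PASS is set"
```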
.SS Developer options
.PP
These options are useful when developing or debugging rclone.
There are also some more remote specific options which aren\[aq]t
documented here which are used for testing.
These start with remote name eg \f[C]\-\-drive\-test\-option\f[] \- see
the docs for the remote in question.
.SS \-\-cpuprofile=FILE
.PP
Write CPU profile to file.
This can be analysed with \f[C]go\ tool\ pprof\f[].
.SS \-\-dump\-bodies
.PP
Dump HTTP headers and bodies \- may contain sensitive info.
Can be very verbose.
Useful for debugging only.
.SS \-\-dump\-filters
.PP
Dump the filters to the output.
Useful to see exactly what include and exclude options are filtering on.
.SS \-\-dump\-headers
.PP
Dump HTTP headers \- may contain sensitive info.
Can be very verbose.
Useful for debugging only.
.SS \-\-memprofile=FILE
.PP
Write memory profile to file.
This can be analysed with \f[C]go\ tool\ pprof\f[].
.SS \-\-no\-check\-certificate=true/false
.PP
\f[C]\-\-no\-check\-certificate\f[] controls whether a client verifies
the server\[aq]s certificate chain and host name.
If \f[C]\-\-no\-check\-certificate\f[] is true, TLS accepts any
certificate presented by the server and any host name in that
certificate.
In this mode, TLS is susceptible to man\-in\-the\-middle attacks.
.PP
This option defaults to \f[C]false\f[].
.PP
\f[B]This should be used only for testing.\f[]
.SS Filtering
.PP
For the filtering options
.IP \[bu] 2
\f[C]\-\-delete\-excluded\f[]
.IP \[bu] 2
\f[C]\-\-filter\f[]
.IP \[bu] 2
\f[C]\-\-filter\-from\f[]
.IP \[bu] 2
\f[C]\-\-exclude\f[]
.IP \[bu] 2
\f[C]\-\-exclude\-from\f[]
.IP \[bu] 2
\f[C]\-\-include\f[]
.IP \[bu] 2
\f[C]\-\-include\-from\f[]
.IP \[bu] 2
\f[C]\-\-files\-from\f[]
.IP \[bu] 2
\f[C]\-\-min\-size\f[]
.IP \[bu] 2
\f[C]\-\-max\-size\f[]
.IP \[bu] 2
\f[C]\-\-min\-age\f[]
.IP \[bu] 2
\f[C]\-\-max\-age\f[]
.IP \[bu] 2
\f[C]\-\-dump\-filters\f[]
.PP
See the filtering section (http://rclone.org/filtering/).
.SS Logging
.PP
rclone has 3 levels of logging, \f[C]Error\f[], \f[C]Info\f[] and
\f[C]Debug\f[].
.PP
By default rclone logs \f[C]Error\f[] and \f[C]Info\f[] to standard
error and \f[C]Debug\f[] to standard output.
This means you can redirect standard output and standard error to
different places.
.PP
By default rclone will produce \f[C]Error\f[] and \f[C]Info\f[] level
messages.
.PP
If you use the \f[C]\-q\f[] flag, rclone will only produce
\f[C]Error\f[] messages.
.PP
If you use the \f[C]\-v\f[] flag, rclone will produce \f[C]Error\f[],
\f[C]Info\f[] and \f[C]Debug\f[] messages.
.PP
If you use the \f[C]\-\-log\-file=FILE\f[] option, rclone will redirect
\f[C]Error\f[], \f[C]Info\f[] and \f[C]Debug\f[] messages along with
standard error to FILE.
.SS Exit Code
.PP
If any errors occurred during the command, rclone will set a non zero
exit code.
This allows scripts to detect when rclone operations have failed.
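A sketch of acting on the exit code from a shell script; `false` stands in for a failing rclone command so the pattern is self-contained and runs without rclone installed:

```shell
# "false" is a stand-in for a failing command such as
# "rclone sync source: dest:"; substitute the real command in practice.
rclone_cmd="false"
if $rclone_cmd; then
    sync_status="ok"
else
    # any non-zero exit code from rclone means errors occurred
    sync_status="failed"
fi
echo "sync $sync_status"
```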
.SH Configuring rclone on a remote / headless machine
.PP
Some of the configurations (those involving oauth2) require an Internet
connected web browser.
.PP
If you are trying to set rclone up on a remote or headless box with no
browser available on it (eg a NAS or a server in a datacenter) then you
will need to use an alternative means of configuration.
There are two ways of doing it, described below.
.SS Configuring using rclone authorize
.PP
On the headless box
.IP
.nf
\f[C]
\&...
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
y)\ Yes
n)\ No
y/n>\ n
For\ this\ to\ work,\ you\ will\ need\ rclone\ available\ on\ a\ machine\ that\ has\ a\ web\ browser\ available.
Execute\ the\ following\ on\ your\ machine:
\ \ \ \ rclone\ authorize\ "amazon\ cloud\ drive"
Then\ paste\ the\ result\ below:
result>
\f[]
.fi
.PP
Then on your main desktop machine
.IP
.nf
\f[C]
rclone\ authorize\ "amazon\ cloud\ drive"
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
Paste\ the\ following\ into\ your\ remote\ machine\ \-\-\->
SECRET_TOKEN
<\-\-\-End\ paste
\f[]
.fi
.PP
Then back to the headless box, paste in the code
.IP
.nf
\f[C]
result>\ SECRET_TOKEN
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[acd12]
client_id\ =\ 
client_secret\ =\ 
token\ =\ SECRET_TOKEN
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>
\f[]
.fi
.SS Configuring by copying the config file
.PP
Rclone stores all of its config in a single configuration file.
This can easily be copied to configure a remote rclone.
.PP
So first configure rclone on your desktop machine
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
to set up the config file.
.PP
Find the config file by running \f[C]rclone\ \-h\f[] and looking for the
help for the \f[C]\-\-config\f[] option
.IP
.nf
\f[C]
$\ rclone\ \-h
[snip]
\ \ \ \ \ \ \-\-config="/home/user/.rclone.conf":\ Config\ file.
[snip]
\f[]
.fi
.PP
Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
place it in the correct place (use \f[C]rclone\ \-h\f[] on the remote
box to find out where).
.SH Filtering, includes and excludes
.PP
Rclone has a sophisticated set of include and exclude rules.
Some of these are based on patterns and some on other things like file
size.
.PP
The filters are applied for the \f[C]copy\f[], \f[C]sync\f[],
\f[C]move\f[], \f[C]ls\f[], \f[C]lsl\f[], \f[C]md5sum\f[],
\f[C]sha1sum\f[], \f[C]size\f[], \f[C]delete\f[] and \f[C]check\f[]
operations.
Note that \f[C]purge\f[] does not obey the filters.
.PP
Each path as it passes through rclone is matched against the include and
exclude rules like \f[C]\-\-include\f[], \f[C]\-\-exclude\f[],
\f[C]\-\-include\-from\f[], \f[C]\-\-exclude\-from\f[],
\f[C]\-\-filter\f[], or \f[C]\-\-filter\-from\f[].
The simplest way to try them out is using the \f[C]ls\f[] command, or
\f[C]\-\-dry\-run\f[] together with \f[C]\-v\f[].
.PP
\f[B]Important\f[] Due to limitations of the command line parser you can
only use any of these options once \- if you duplicate them then rclone
will use the last one only.
.SS Patterns
.PP
The patterns used to match files for inclusion or exclusion are based on
"file globs" as used by the unix shell.
.PP
If the pattern starts with a \f[C]/\f[] then it only matches at the top
level of the directory tree, relative to the root of the remote.
If it doesn\[aq]t start with \f[C]/\f[] then it is matched starting at
the \f[B]end of the path\f[], but it will only match a complete path
element:
.IP
.nf
\f[C]
file.jpg\ \ \-\ matches\ "file.jpg"
\ \ \ \ \ \ \ \ \ \ \-\ matches\ "directory/file.jpg"
\ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "afile.jpg"
\ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "directory/afile.jpg"
/file.jpg\ \-\ matches\ "file.jpg"\ in\ the\ root\ directory\ of\ the\ remote
\ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "afile.jpg"
\ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "directory/file.jpg"
\f[]
.fi
.PP
\f[B]Important\f[] Note that you must use \f[C]/\f[] in patterns and not
\f[C]\\\f[] even if running on Windows.
.PP
A \f[C]*\f[] matches anything but not a \f[C]/\f[].
.IP
.nf
\f[C]
*.jpg\ \ \-\ matches\ "file.jpg"
\ \ \ \ \ \ \ \-\ matches\ "directory/file.jpg"
\ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "file.jpg/something"
\f[]
.fi
.PP
Use \f[C]**\f[] to match anything, including slashes (\f[C]/\f[]).
.IP
.nf
\f[C]
dir/**\ \-\ matches\ "dir/file.jpg"
\ \ \ \ \ \ \ \-\ matches\ "dir/dir1/dir2/file.jpg"
\ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "directory/file.jpg"
\ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "adir/file.jpg"
\f[]
.fi
.PP
A \f[C]?\f[] matches any character except a slash \f[C]/\f[].
.IP
.nf
\f[C]
l?ss\ \ \-\ matches\ "less"
\ \ \ \ \ \ \-\ matches\ "lass"
\ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "floss"
\f[]
.fi
.PP
A \f[C][\f[] and \f[C]]\f[] together make a character class, such as
\f[C][a\-z]\f[] or \f[C][aeiou]\f[] or \f[C][[:alpha:]]\f[].
See the go regexp docs (https://golang.org/pkg/regexp/syntax/) for more
info on these.
.IP
.nf
\f[C]
h[ae]llo\ \-\ matches\ "hello"
\ \ \ \ \ \ \ \ \ \-\ matches\ "hallo"
\ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "hullo"
\f[]
.fi
.PP
A \f[C]{\f[] and \f[C]}\f[] define a choice between elements.
It should contain a comma separated list of patterns, any of which might
match.
These patterns can contain wildcards.
.IP
.nf
\f[C]
{one,two}_potato\ \-\ matches\ "one_potato"
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ matches\ "two_potato"
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "three_potato"
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \-\ doesn\[aq]t\ match\ "_potato"
\f[]
.fi
.PP
Special characters can be escaped with a \f[C]\\\f[] before them.
.IP
.nf
\f[C]
\\*.jpg\ \ \ \ \ \ \ \-\ matches\ "*.jpg"
\\\\.jpg\ \ \ \ \ \ \ \-\ matches\ "\\.jpg"
\\[one\\].jpg\ \ \-\ matches\ "[one].jpg"
\f[]
.fi
.PP
Note also that rclone filter globs can only be used in one of the filter
command line flags, not in the specification of the remote, so
\f[C]rclone\ copy\ "remote:dir*.jpg"\ /path/to/dir\f[] won\[aq]t work \-
what is required is
\f[C]rclone\ \-\-include\ "*.jpg"\ copy\ remote:dir\ /path/to/dir\f[]
.SS Directories
.PP
Rclone keeps track of directories that could match any file patterns.
.PP
Eg if you add the include rule
.IP
.nf
\f[C]
/a/*.jpg
\f[]
.fi
.PP
Rclone will synthesize the directory include rule
.IP
.nf
\f[C]
/a/
\f[]
.fi
.PP
If you put any rules which end in \f[C]/\f[] then it will only match
directories.
.PP
Directory matches are \f[B]only\f[] used to optimise directory access
patterns \- you must still match the files that you want to match.
Directory matches won\[aq]t optimise anything on bucket based remotes
(eg s3, swift, google compute storage, b2) which don\[aq]t have a
concept of directory.
.SS Differences between rsync and rclone patterns
.PP
Rclone implements bash style \f[C]{a,b,c}\f[] glob matching which rsync
doesn\[aq]t.
.PP
Rclone always does a wildcard match so \f[C]\\\f[] must always escape a
\f[C]\\\f[].
.SS How the rules are used
.PP
Rclone maintains a list of include rules and exclude rules.
.PP
Each file is matched in order against the list until it finds a match.
The file is then included or excluded according to the rule type.
.PP
If the matcher falls off the bottom of the list then the path is
included.
.PP
For example given the following rules, \f[C]+\f[] being include,
\f[C]\-\f[] being exclude,
.IP
.nf
\f[C]
\-\ secret*.jpg
+\ *.jpg
+\ *.png
+\ file2.avi
\-\ *
\f[]
.fi
.PP
This would include
.IP \[bu] 2
\f[C]file1.jpg\f[]
.IP \[bu] 2
\f[C]file3.png\f[]
.IP \[bu] 2
\f[C]file2.avi\f[]
.PP
This would exclude
.IP \[bu] 2
\f[C]secret17.jpg\f[]
.IP \[bu] 2
non \f[C]*.jpg\f[] and \f[C]*.png\f[]
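The first-match-wins behaviour above can be illustrated with a shell case statement. This is only a sketch - rclone's matcher is implemented in Go, and shell globs differ from rclone globs (a shell `*` also matches `/`):

```shell
# Sketch of first-match-wins filtering: each case arm mirrors one of
# the rules above, tried top to bottom until one matches.
classify() {
    case "$1" in
        secret*.jpg) echo "excluded" ;;   # - secret*.jpg
        *.jpg|*.png) echo "included" ;;   # + *.jpg / + *.png
        file2.avi)   echo "included" ;;   # + file2.avi
        *)           echo "excluded" ;;   # - *  (catch-all)
    esac
}
classify secret17.jpg   # excluded: first rule wins before "+ *.jpg"
classify file1.jpg      # included
```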
.PP
A similar process is done on directory entries before recursing into
them.
This only works on remotes which have a concept of directory (Eg local,
drive, onedrive, amazon cloud drive) and not on bucket based remotes (eg
s3, swift, google compute storage, b2).
.SS Adding filtering rules
.PP
Filtering rules are added with the following command line flags.
.SS \f[C]\-\-exclude\f[] \- Exclude files matching pattern
.PP
Add a single exclude rule with \f[C]\-\-exclude\f[].
.PP
Eg \f[C]\-\-exclude\ *.bak\f[] to exclude all bak files from the sync.
.SS \f[C]\-\-exclude\-from\f[] \- Read exclude patterns from file
.PP
Add exclude rules from a file.
.PP
Prepare a file like this \f[C]exclude\-file.txt\f[]
.IP
.nf
\f[C]
#\ a\ sample\ exclude\ rule\ file
*.bak
file2.jpg
\f[]
.fi
.PP
Then use as \f[C]\-\-exclude\-from\ exclude\-file.txt\f[].
This will sync all files except those ending in \f[C]bak\f[] and
\f[C]file2.jpg\f[].
.PP
This is useful if you have a lot of rules.
.SS \f[C]\-\-include\f[] \- Include files matching pattern
.PP
Add a single include rule with \f[C]\-\-include\f[].
.PP
Eg \f[C]\-\-include\ *.{png,jpg}\f[] to include all \f[C]png\f[] and
\f[C]jpg\f[] files in the backup and no others.
.PP
This adds an implicit \f[C]\-\-exclude\ *\f[] at the very end of the
filter list.
This means you can mix \f[C]\-\-include\f[] and
\f[C]\-\-include\-from\f[] with the other filters (eg
\f[C]\-\-exclude\f[]) but you must include all the files you want in the
include statement.
If this doesn\[aq]t provide enough flexibility then you must use
\f[C]\-\-filter\-from\f[].
.SS \f[C]\-\-include\-from\f[] \- Read include patterns from file
.PP
Add include rules from a file.
.PP
Prepare a file like this \f[C]include\-file.txt\f[]
.IP
.nf
\f[C]
#\ a\ sample\ include\ rule\ file
*.jpg
*.png
file2.avi
\f[]
.fi
.PP
Then use as \f[C]\-\-include\-from\ include\-file.txt\f[].
This will sync all \f[C]jpg\f[], \f[C]png\f[] files and
\f[C]file2.avi\f[].
.PP
This is useful if you have a lot of rules.
.PP
This adds an implicit \f[C]\-\-exclude\ *\f[] at the very end of the
filter list.
This means you can mix \f[C]\-\-include\f[] and
\f[C]\-\-include\-from\f[] with the other filters (eg
\f[C]\-\-exclude\f[]) but you must include all the files you want in the
include statement.
If this doesn\[aq]t provide enough flexibility then you must use
\f[C]\-\-filter\-from\f[].
.SS \f[C]\-\-filter\f[] \- Add a file\-filtering rule
.PP
This can be used to add a single include or exclude rule.
Include rules start with \f[C]+\f[] and exclude rules start with
\f[C]\-\f[].
A special rule called \f[C]!\f[] can be used to clear the existing
rules.
.PP
Eg \f[C]\-\-filter\ "\-\ *.bak"\f[] to exclude all bak files from the
sync.
.SS \f[C]\-\-filter\-from\f[] \- Read filtering patterns from a file
.PP
Add include/exclude rules from a file.
.PP
Prepare a file like this \f[C]filter\-file.txt\f[]
.IP
.nf
\f[C]
#\ a\ sample\ exclude\ rule\ file
\-\ secret*.jpg
+\ *.jpg
+\ *.png
+\ file2.avi
#\ exclude\ everything\ else
\-\ *
\f[]
.fi
.PP
Then use as \f[C]\-\-filter\-from\ filter\-file.txt\f[].
The rules are processed in the order that they are defined.
.PP
This example will include all \f[C]jpg\f[] and \f[C]png\f[] files,
exclude any files matching \f[C]secret*.jpg\f[] and include
\f[C]file2.avi\f[].
Everything else will be excluded from the sync.
.SS \f[C]\-\-files\-from\f[] \- Read list of source\-file names
.PP
This reads a list of file names from the file passed in and
\f[B]only\f[] these files are transferred.
The filtering rules are ignored completely if you use this option.
.PP
Prepare a file like this \f[C]files\-from.txt\f[]
.IP
.nf
\f[C]
#\ comment
file1.jpg
file2.jpg
\f[]
.fi
.PP
Then use as \f[C]\-\-files\-from\ files\-from.txt\f[].
This will only transfer \f[C]file1.jpg\f[] and \f[C]file2.jpg\f[]
providing they exist.
.SS \f[C]\-\-min\-size\f[] \- Don\[aq]t transfer any file smaller than
this
.PP
This option controls the minimum size file which will be transferred.
This defaults to \f[C]kBytes\f[] but a suffix of \f[C]k\f[], \f[C]M\f[],
or \f[C]G\f[] can be used.
.PP
For example \f[C]\-\-min\-size\ 50k\f[] means no files smaller than
50kByte will be transferred.
.SS \f[C]\-\-max\-size\f[] \- Don\[aq]t transfer any file larger than
this
.PP
This option controls the maximum size file which will be transferred.
This defaults to \f[C]kBytes\f[] but a suffix of \f[C]k\f[], \f[C]M\f[],
or \f[C]G\f[] can be used.
.PP
For example \f[C]\-\-max\-size\ 1G\f[] means no files larger than 1GByte
will be transferred.
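A hypothetical helper mirroring the size syntax above: a bare number is kBytes and `k`, `M`, `G` are suffixes. Binary (1024-based) multiples are assumed here; this is an illustration of the syntax, not rclone's own parser:

```shell
# Hypothetical size parser (assumption: 1024-based multiples).
to_bytes() {
    case "$1" in
        *k) echo $(( ${1%k} * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *)  echo $(( $1 * 1024 )) ;;   # no suffix: kBytes
    esac
}
to_bytes 50k   # 51200
to_bytes 1G    # 1073741824
```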
.SS \f[C]\-\-max\-age\f[] \- Don\[aq]t transfer any file older than this
.PP
This option controls the maximum age of files to transfer.
Give in seconds or with a suffix of:
.IP \[bu] 2
\f[C]ms\f[] \- Milliseconds
.IP \[bu] 2
\f[C]s\f[] \- Seconds
.IP \[bu] 2
\f[C]m\f[] \- Minutes
.IP \[bu] 2
\f[C]h\f[] \- Hours
.IP \[bu] 2
\f[C]d\f[] \- Days
.IP \[bu] 2
\f[C]w\f[] \- Weeks
.IP \[bu] 2
\f[C]M\f[] \- Months
.IP \[bu] 2
\f[C]y\f[] \- Years
.PP
For example \f[C]\-\-max\-age\ 2d\f[] means no files older than 2 days
will be transferred.
.SS \f[C]\-\-min\-age\f[] \- Don\[aq]t transfer any file younger than
this
.PP
This option controls the minimum age of files to transfer.
Give in seconds or with a suffix (see \f[C]\-\-max\-age\f[] for list of
suffixes).
.PP
For example \f[C]\-\-min\-age\ 2d\f[] means no files younger than 2 days
will be transferred.
.SS \f[C]\-\-delete\-excluded\f[] \- Delete files on dest excluded from
sync
.PP
\f[B]Important\f[] this flag is dangerous \- use with
\f[C]\-\-dry\-run\f[] and \f[C]\-v\f[] first.
.PP
When doing \f[C]rclone\ sync\f[] this will delete any files which are
excluded from the sync on the destination.
.PP
If for example you did a sync from \f[C]A\f[] to \f[C]B\f[] without the
\f[C]\-\-min\-size\ 50k\f[] flag
.IP
.nf
\f[C]
rclone\ sync\ A:\ B:
\f[]
.fi
.PP
Then you repeated it like this with the \f[C]\-\-delete\-excluded\f[]
.IP
.nf
\f[C]
rclone\ \-\-min\-size\ 50k\ \-\-delete\-excluded\ sync\ A:\ B:
\f[]
.fi
.PP
This would delete all files on \f[C]B\f[] which are less than 50 kBytes
as these are now excluded from the sync.
.PP
Always test first with \f[C]\-\-dry\-run\f[] and \f[C]\-v\f[] before
using this flag.
.SS \f[C]\-\-dump\-filters\f[] \- dump the filters to the output
.PP
This dumps the defined filters to the output as regular expressions.
.PP
Useful for debugging.
.SS Quoting shell metacharacters
.PP
The examples above may not work verbatim in your shell as they have
shell metacharacters in them (eg \f[C]*\f[]), and may require quoting.
.PP
Eg linux, OSX
.IP \[bu] 2
\f[C]\-\-include\ \\*.jpg\f[]
.IP \[bu] 2
\f[C]\-\-include\ \[aq]*.jpg\[aq]\f[]
.IP \[bu] 2
\f[C]\-\-include=\[aq]*.jpg\[aq]\f[]
.PP
In Windows the expansion is done by the command not the shell so this
should work fine
.IP \[bu] 2
\f[C]\-\-include\ *.jpg\f[]
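The difference quoting makes on linux/OSX can be seen with a plain `echo` in a scratch directory - the shell expands an unquoted `*.jpg` against the local directory before the command (here `echo`, in practice rclone) ever sees it:

```shell
# Demonstration: unquoted globs are expanded by the shell first.
cd "$(mktemp -d)"
touch file1.jpg file2.jpg
echo unquoted: *.jpg    # the shell substitutes the local file names
echo quoted: '*.jpg'    # the literal pattern reaches the command
```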
.SH Overview of cloud storage systems
.PP
Each cloud storage system is slightly different.
Rclone attempts to provide a unified interface to them, but some
underlying differences show through.
.SS Features
.PP
Here is an overview of the major features of each cloud storage system.
.PP
.TS
tab(@);
l c c c c.
T{
Name
T}@T{
Hash
T}@T{
ModTime
T}@T{
Case Insensitive
T}@T{
Duplicate Files
T}
_
T{
Google Drive
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
Yes
T}
T{
Amazon S3
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
Openstack Swift
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
Dropbox
T}@T{
\-
T}@T{
No
T}@T{
Yes
T}@T{
No
T}
T{
Google Cloud Storage
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
Amazon Drive
T}@T{
MD5
T}@T{
No
T}@T{
Yes
T}@T{
No
T}
T{
Microsoft One Drive
T}@T{
SHA1
T}@T{
Yes
T}@T{
Yes
T}@T{
No
T}
T{
Hubic
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
Backblaze B2
T}@T{
SHA1
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
Yandex Disk
T}@T{
MD5
T}@T{
Yes
T}@T{
No
T}@T{
No
T}
T{
The local filesystem
T}@T{
All
T}@T{
Yes
T}@T{
Depends
T}@T{
No
T}
.TE
|
|
|
|
|
.SS Hash
.PP
The cloud storage system supports various hash types of the objects.
.PD 0
.P
.PD
The hashes are used when transferring data as an integrity check and can
be specifically used with the \f[C]\-\-checksum\f[] flag in syncs and in
the \f[C]check\f[] command.
.PP
To use the checksum checks between filesystems they must support a
common hash type.
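.PP
For example, to compare two remotes by hash rather than by size and
modification time (assuming both sides support a common hash type \- the
remote names here are illustrative), one might run:
.IP
.nf
\f[C]
rclone\ check\ remote:backup\ other:backup
rclone\ sync\ \-\-checksum\ /home/source\ remote:backup
\f[]
.fi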
.SS ModTime
.PP
The cloud storage system supports setting modification times on objects.
If it does then this enables using the modification times as part of
the sync.
If not then only the size will be checked by default, though the MD5SUM
can be checked with the \f[C]\-\-checksum\f[] flag.
.PP
All cloud storage systems support some kind of date on the object and
these will be set when transferring from the cloud storage system.
.SS Case Insensitive
.PP
If a cloud storage system is case sensitive then it is possible to have
two files which differ only in case, eg \f[C]file.txt\f[] and
\f[C]FILE.txt\f[].
If a cloud storage system is case insensitive then that isn\[aq]t
possible.
.PP
This can cause problems when syncing between a case insensitive system
and a case sensitive system.
The symptom of this is that no matter how many times you run the sync it
never completes fully.
.PP
The local filesystem may or may not be case sensitive depending on OS.
.IP \[bu] 2
Windows \- usually case insensitive, though case is preserved
.IP \[bu] 2
OSX \- usually case insensitive, though it is possible to format case
sensitive
.IP \[bu] 2
Linux \- usually case sensitive, but there are case insensitive file
systems (eg FAT formatted USB keys)
.PP
Most of the time this doesn\[aq]t cause any problems as people tend to
avoid files whose name differs only by case even on case sensitive
systems.
.SS Duplicate files
.PP
If a cloud storage system allows duplicate files then it can have two
objects with the same name.
.PP
This confuses rclone greatly when syncing \- use the
\f[C]rclone\ dedupe\f[] command to rename or remove duplicates.
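.PP
For example, to interactively rename or remove duplicates in an
illustrative directory \f[C]remote:dupes\f[] on a remote which allows
them, one might run:
.IP
.nf
\f[C]
rclone\ dedupe\ remote:dupes
\f[]
.fi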
.SS Google Drive
.PP
Paths are specified as \f[C]drive:path\f[]
.PP
Drive paths may be as deep as required, eg
\f[C]drive:directory/subdirectory\f[].
.PP
The initial setup for drive involves getting a token from Google drive
which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
n)\ New\ remote
d)\ Delete\ remote
q)\ Quit\ config
e/n/d/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 6
Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id\ =\
client_secret\ =\
token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
to unblock it temporarily if you are running a host firewall, or use
manual mode.
.PP
You can then use it like this,
.PP
List directories in top level of your drive
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
List all the files in your drive
.IP
.nf
\f[C]
rclone\ ls\ remote:
\f[]
.fi
.PP
To copy a local directory to a drive directory called backup
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
.SS Modified time
.PP
Google drive stores modification times accurate to 1 ms.
.SS Revisions
.PP
Google drive stores revisions of files.
When you upload a change to an existing file to google drive using
rclone it will create a new revision of that file.
.PP
Revisions follow the standard google policy which at time of writing was
.IP \[bu] 2
They are deleted after 30 days or 100 revisions (whichever comes first).
.IP \[bu] 2
They do not count towards a user storage quota.
.SS Deleting files
.PP
By default rclone will delete files permanently when requested.
If sending them to the trash is required instead then use the
\f[C]\-\-drive\-use\-trash\f[] flag.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-drive\-chunk\-size=SIZE
.PP
Upload chunk size.
Must be a power of 2 >= 256k.
Default value is 8 MB.
.PP
Making this larger will improve performance, but note that each chunk is
buffered in memory one per transfer.
.PP
Reducing this will reduce memory usage but decrease performance.
.SS \-\-drive\-full\-list
.PP
No longer does anything \- kept for backwards compatibility.
.SS \-\-drive\-upload\-cutoff=SIZE
.PP
File size cutoff for switching to chunked upload.
Default is 8 MB.
.SS \-\-drive\-use\-trash
.PP
Send files to the trash instead of deleting permanently.
Defaults to off, namely deleting files permanently.
.SS \-\-drive\-auth\-owner\-only
.PP
Only consider files owned by the authenticated user.
Requires that \-\-drive\-full\-list=true (default).
.SS \-\-drive\-formats
.PP
Google documents can only be exported from Google drive.
When rclone downloads a Google doc it chooses a format to download
depending upon this setting.
.PP
By default the formats are \f[C]docx,xlsx,pptx,svg\f[] which are a
sensible default for an editable document.
.PP
When choosing a format, rclone runs down the list provided in order and
chooses the first file format the doc can be exported as from the list.
If the file can\[aq]t be exported to a format on the formats list, then
rclone will choose a format from the default list.
.PP
If you prefer an archive copy then you might use
\f[C]\-\-drive\-formats\ pdf\f[], or if you prefer
openoffice/libreoffice formats you might use
\f[C]\-\-drive\-formats\ ods,odt\f[].
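.PP
As an illustrative sketch, exporting any Google docs as PDFs while
copying to a local archive (the paths here are hypothetical) might look
like:
.IP
.nf
\f[C]
rclone\ copy\ \-\-drive\-formats\ pdf\ remote:documents\ /path/to/archive
\f[]
.fi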
.PP
Note that rclone adds the extension to the google doc, so if it is
called \f[C]My\ Spreadsheet\f[] on google docs, it will be exported as
\f[C]My\ Spreadsheet.xlsx\f[] or \f[C]My\ Spreadsheet.pdf\f[] etc.
.PP
Here are the possible extensions with their corresponding mime types.
.PP
.TS
tab(@);
lw(9.7n) lw(11.7n) lw(12.6n).
T{
Extension
T}@T{
Mime Type
T}@T{
Description
T}
_
T{
csv
T}@T{
text/csv
T}@T{
Standard CSV format for Spreadsheets
T}
T{
doc
T}@T{
application/msword
T}@T{
Microsoft Office Document
T}
T{
docx
T}@T{
application/vnd.openxmlformats\-officedocument.wordprocessingml.document
T}@T{
Microsoft Office Document
T}
T{
html
T}@T{
text/html
T}@T{
An HTML Document
T}
T{
jpg
T}@T{
image/jpeg
T}@T{
A JPEG Image File
T}
T{
ods
T}@T{
application/vnd.oasis.opendocument.spreadsheet
T}@T{
Openoffice Spreadsheet
T}
T{
ods
T}@T{
application/x\-vnd.oasis.opendocument.spreadsheet
T}@T{
Openoffice Spreadsheet
T}
T{
odt
T}@T{
application/vnd.oasis.opendocument.text
T}@T{
Openoffice Document
T}
T{
pdf
T}@T{
application/pdf
T}@T{
Adobe PDF Format
T}
T{
png
T}@T{
image/png
T}@T{
PNG Image Format
T}
T{
pptx
T}@T{
application/vnd.openxmlformats\-officedocument.presentationml.presentation
T}@T{
Microsoft Office Powerpoint
T}
T{
rtf
T}@T{
application/rtf
T}@T{
Rich Text Format
T}
T{
svg
T}@T{
image/svg+xml
T}@T{
Scalable Vector Graphics Format
T}
T{
txt
T}@T{
text/plain
T}@T{
Plain Text
T}
T{
xls
T}@T{
application/vnd.ms\-excel
T}@T{
Microsoft Office Spreadsheet
T}
T{
xlsx
T}@T{
application/vnd.openxmlformats\-officedocument.spreadsheetml.sheet
T}@T{
Microsoft Office Spreadsheet
T}
T{
zip
T}@T{
application/zip
T}@T{
A ZIP file of HTML, Images and CSS
T}
.TE
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
This causes rclone to be limited to transferring about 2 files per
second only.
Individual files may be transferred much faster at 100s of MBytes/s but
lots of small files can take a long time.
.SS Amazon S3
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
\f[C]remote:bucket/path/to/dir\f[].
.PP
Here is an example of making an s3 configuration.
First run
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process.
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
n/s>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 2
Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step
\ \ \ \\\ "false"
\ 2\ /\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM)
\ \ \ \\\ "true"
env_auth>\ 1
AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
access_key_id>\ access_key
AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
secret_access_key>\ secret_key
Region\ to\ connect\ to.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ /\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure.
\ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
\ \ \ |\ Leave\ location\ constraint\ empty.
\ \ \ \\\ "us\-east\-1"
\ \ \ /\ US\ West\ (Oregon)\ Region
\ 2\ |\ Needs\ location\ constraint\ us\-west\-2.
\ \ \ \\\ "us\-west\-2"
\ \ \ /\ US\ West\ (Northern\ California)\ Region
\ 3\ |\ Needs\ location\ constraint\ us\-west\-1.
\ \ \ \\\ "us\-west\-1"
\ \ \ /\ EU\ (Ireland)\ Region\ Region
\ 4\ |\ Needs\ location\ constraint\ EU\ or\ eu\-west\-1.
\ \ \ \\\ "eu\-west\-1"
\ \ \ /\ EU\ (Frankfurt)\ Region
\ 5\ |\ Needs\ location\ constraint\ eu\-central\-1.
\ \ \ \\\ "eu\-central\-1"
\ \ \ /\ Asia\ Pacific\ (Singapore)\ Region
\ 6\ |\ Needs\ location\ constraint\ ap\-southeast\-1.
\ \ \ \\\ "ap\-southeast\-1"
\ \ \ /\ Asia\ Pacific\ (Sydney)\ Region
\ 7\ |\ Needs\ location\ constraint\ ap\-southeast\-2.
\ \ \ \\\ "ap\-southeast\-2"
\ \ \ /\ Asia\ Pacific\ (Tokyo)\ Region
\ 8\ |\ Needs\ location\ constraint\ ap\-northeast\-1.
\ \ \ \\\ "ap\-northeast\-1"
\ \ \ /\ South\ America\ (Sao\ Paulo)\ Region
\ 9\ |\ Needs\ location\ constraint\ sa\-east\-1.
\ \ \ \\\ "sa\-east\-1"
\ \ \ /\ If\ using\ an\ S3\ clone\ that\ only\ understands\ v2\ signatures
10\ |\ eg\ Ceph/Dreamhost
\ \ \ |\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint.
\ \ \ \\\ "other\-v2\-signature"
\ \ \ /\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this
11\ |\ and\ make\ sure\ you\ set\ the\ endpoint.
\ \ \ \\\ "other\-v4\-signature"
region>\ 1
Endpoint\ for\ S3\ API.
Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region.
Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph.
endpoint>\
Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
\ \ \ \\\ ""
\ 2\ /\ US\ West\ (Oregon)\ Region.
\ \ \ \\\ "us\-west\-2"
\ 3\ /\ US\ West\ (Northern\ California)\ Region.
\ \ \ \\\ "us\-west\-1"
\ 4\ /\ EU\ (Ireland)\ Region.
\ \ \ \\\ "eu\-west\-1"
\ 5\ /\ EU\ Region.
\ \ \ \\\ "EU"
\ 6\ /\ Asia\ Pacific\ (Singapore)\ Region.
\ \ \ \\\ "ap\-southeast\-1"
\ 7\ /\ Asia\ Pacific\ (Sydney)\ Region.
\ \ \ \\\ "ap\-southeast\-2"
\ 8\ /\ Asia\ Pacific\ (Tokyo)\ Region.
\ \ \ \\\ "ap\-northeast\-1"
\ 9\ /\ South\ America\ (Sao\ Paulo)\ Region.
\ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1
The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ None
\ \ \ \\\ ""
\ 2\ /\ AES256
\ \ \ \\\ "AES256"
server_side_encryption>
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
env_auth\ =\ false
access_key_id\ =\ access_key
secret_access_key\ =\ secret_key
region\ =\ us\-east\-1
endpoint\ =\
location_constraint\ =\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
This remote is called \f[C]remote\f[] and can now be used like this
.PP
See all buckets
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
Make a new bucket
.IP
.nf
\f[C]
rclone\ mkdir\ remote:bucket
\f[]
.fi
.PP
List the contents of a bucket
.IP
.nf
\f[C]
rclone\ ls\ remote:bucket
\f[]
.fi
.PP
Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any
excess files in the bucket.
.IP
.nf
\f[C]
rclone\ sync\ /home/local/directory\ remote:bucket
\f[]
.fi
.SS Modified time
.PP
The modified time is stored as metadata on the object as
\f[C]X\-Amz\-Meta\-Mtime\f[] as floating point since the epoch accurate
to 1 ns.
.SS Multipart uploads
.PP
rclone supports multipart uploads with S3 which means that it can upload
files bigger than 5GB.
Note that files uploaded with multipart upload don\[aq]t have an MD5SUM.
.SS Buckets and Regions
.PP
With Amazon S3 you can list buckets (\f[C]rclone\ lsd\f[]) using any
region, but you can only access the content of a bucket from the region
it was created in.
If you attempt to access a bucket from the wrong region, you will get an
error,
\f[C]incorrect\ region,\ the\ bucket\ is\ not\ in\ \[aq]XXX\[aq]\ region\f[].
.SS Authentication
.PP
There are two ways to supply \f[C]rclone\f[] with a set of AWS
credentials.
In order of precedence:
.IP \[bu] 2
Directly in the rclone configuration file (as configured by
\f[C]rclone\ config\f[])
.IP \[bu] 2
set \f[C]access_key_id\f[] and \f[C]secret_access_key\f[]
.IP \[bu] 2
Runtime configuration:
.IP \[bu] 2
set \f[C]env_auth\f[] to \f[C]true\f[] in the config file
.IP \[bu] 2
Exporting the following environment variables before running
\f[C]rclone\f[]
.RS 2
.IP \[bu] 2
Access Key ID: \f[C]AWS_ACCESS_KEY_ID\f[] or \f[C]AWS_ACCESS_KEY\f[]
.IP \[bu] 2
Secret Access Key: \f[C]AWS_SECRET_ACCESS_KEY\f[] or
\f[C]AWS_SECRET_KEY\f[]
.RE
.IP \[bu] 2
Running \f[C]rclone\f[] on an EC2 instance with an IAM role
.PP
If none of these options actually ends up providing \f[C]rclone\f[] with
AWS credentials then S3 interaction will be non\-authenticated (see
below).
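.PP
For example, with \f[C]env_auth\ =\ true\f[] in the config file, rclone
can pick up credentials exported in the shell (the key values here are
illustrative):
.IP
.nf
\f[C]
export\ AWS_ACCESS_KEY_ID=XXXXXXXX
export\ AWS_SECRET_ACCESS_KEY=YYYYYYYY
rclone\ lsd\ remote:
\f[]
.fi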
.SS Anonymous access to public buckets
.PP
If you want to use rclone to access a public bucket, configure with a
blank \f[C]access_key_id\f[] and \f[C]secret_access_key\f[].
Eg
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
q)\ Quit\ config
n/q>\ n
name>\ anons3
What\ type\ of\ source\ is\ it?
Choose\ a\ number\ from\ below
\ 1)\ amazon\ cloud\ drive
\ 2)\ b2
\ 3)\ drive
\ 4)\ dropbox
\ 5)\ google\ cloud\ storage
\ 6)\ swift
\ 7)\ hubic
\ 8)\ local
\ 9)\ onedrive
10)\ s3
11)\ yandex
type>\ 10
Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ *\ Enter\ AWS\ credentials\ in\ the\ next\ step
\ 1)\ false
\ *\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM)
\ 2)\ true
env_auth>\ 1
AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
access_key_id>
AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
secret_access_key>
\&...
\f[]
.fi
.PP
Then use it as normal with the name of the public bucket, eg
.IP
.nf
\f[C]
rclone\ lsd\ anons3:1000genomes
\f[]
.fi
.PP
You will be able to list and copy data but not upload it.
.SS Ceph
.PP
Ceph is an object storage system which presents an Amazon S3 interface.
.PP
To use rclone with ceph, you need to set the following parameters in the
config.
.IP
.nf
\f[C]
access_key_id\ =\ Whatever
secret_access_key\ =\ Whatever
endpoint\ =\ https://ceph.endpoint.goes.here/
region\ =\ other\-v2\-signature
\f[]
.fi
.PP
Note also that Ceph sometimes puts \f[C]/\f[] in the passwords it gives
users.
If you read the secret access key using the command line tools you will
get a JSON blob with the \f[C]/\f[] escaped as \f[C]\\/\f[].
Make sure you only write \f[C]/\f[] in the secret access key.
.PP
Eg the dump from Ceph looks something like this (irrelevant keys
removed).
.IP
.nf
\f[C]
{
\ \ \ \ "user_id":\ "xxx",
\ \ \ \ "display_name":\ "xxxx",
\ \ \ \ "keys":\ [
\ \ \ \ \ \ \ \ {
\ \ \ \ \ \ \ \ \ \ \ \ "user":\ "xxx",
\ \ \ \ \ \ \ \ \ \ \ \ "access_key":\ "xxxxxx",
\ \ \ \ \ \ \ \ \ \ \ \ "secret_key":\ "xxxxxx\\/xxxx"
\ \ \ \ \ \ \ \ }
\ \ \ \ ],
}
\f[]
.fi
.PP
Because this is a json dump, it is encoding the \f[C]/\f[] as
\f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it
will work fine.
.SS Swift
.PP
Swift refers to Openstack Object
Storage (http://www.openstack.org/software/openstack-storage/).
Commercial implementations of that being:
.IP \[bu] 2
Rackspace Cloud Files (http://www.rackspace.com/cloud/files/)
.IP \[bu] 2
Memset Memstore (http://www.memset.com/cloud/storage/)
.PP
Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[]
for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
\f[C]remote:container/path/to/dir\f[].
.PP
Here is an example of making a swift configuration.
First run
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process.
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
n/s>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 10
User\ name\ to\ log\ in.
user>\ user_name
API\ key\ or\ password.
key>\ password_or_api_key
Authentication\ URL\ for\ server.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Rackspace\ US
\ \ \ \\\ "https://auth.api.rackspacecloud.com/v1.0"
\ 2\ /\ Rackspace\ UK
\ \ \ \\\ "https://lon.auth.api.rackspacecloud.com/v1.0"
\ 3\ /\ Rackspace\ v2
\ \ \ \\\ "https://identity.api.rackspacecloud.com/v2.0"
\ 4\ /\ Memset\ Memstore\ UK
\ \ \ \\\ "https://auth.storage.memset.com/v1.0"
\ 5\ /\ Memset\ Memstore\ UK\ v2
\ \ \ \\\ "https://auth.storage.memset.com/v2.0"
\ 6\ /\ OVH
\ \ \ \\\ "https://auth.cloud.ovh.net/v2.0"
auth>\ 1
User\ domain\ \-\ optional\ (v3\ auth)
domain>\ Default
Tenant\ name\ \-\ optional
tenant>\
Region\ name\ \-\ optional
region>\
Storage\ URL\ \-\ optional
storage_url>\
Remote\ config
AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version
auth_version>\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
user\ =\ user_name
key\ =\ password_or_api_key
auth\ =\ https://auth.api.rackspacecloud.com/v1.0
tenant\ =\
region\ =\
storage_url\ =\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
|
|
|
|
|
This remote is called \f[C]remote\f[] and can now be used like this
|
|
|
|
|
.PP
|
|
|
|
|
See all containers
|
|
|
|
|
.IP
|
|
|
|
|
.nf
|
|
|
|
|
\f[C]
|
|
|
|
|
rclone\ lsd\ remote:
|
|
|
|
|
\f[]
|
|
|
|
|
.fi
|
|
|
|
|
.PP
|
|
|
|
|
Make a new container
|
|
|
|
|
.IP
|
|
|
|
|
.nf
|
|
|
|
|
\f[C]
|
|
|
|
|
rclone\ mkdir\ remote:container
|
|
|
|
|
\f[]
|
|
|
|
|
.fi
|
|
|
|
|
.PP
|
|
|
|
|
List the contents of a container
|
|
|
|
|
.IP
|
|
|
|
|
.nf
|
|
|
|
|
\f[C]
|
|
|
|
|
rclone\ ls\ remote:container
|
|
|
|
|
\f[]
|
|
|
|
|
.fi
|
|
|
|
|
.PP
|
|
|
|
|
Sync \f[C]/home/local/directory\f[] to the remote container, deleting
|
|
|
|
|
any excess files in the container.
|
|
|
|
|
.IP
|
|
|
|
|
.nf
|
|
|
|
|
\f[C]
|
|
|
|
|
rclone\ sync\ /home/local/directory\ remote:container
|
|
|
|
|
\f[]
|
|
|
|
|
.fi
|
|
|
|
|
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-swift\-chunk\-size=SIZE
.PP
Above this size files will be chunked into a _segments container.
The default for this is 5GB which is its maximum value.
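.PP
For example, to lower the chunking threshold to 1G (the 1G value here is purely illustrative):
.IP
.nf
\f[C]
rclone\ \-\-swift\-chunk\-size\ 1G\ copy\ /home/local/directory\ remote:container
\f[]
.fi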
.SS Modified time
.PP
The modified time is stored as metadata on the object as
\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch
accurate to 1 ns.
.PP
This is a de facto standard (used in the official python\-swiftclient
amongst others) for storing the modification time for an object.
.SS Limitations
.PP
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these.
.SS Troubleshooting
.SS Rclone gives Failed to create file system for "remote:": Bad Request
.PP
Due to an oddity of the underlying swift library, it gives a "Bad
Request" error rather than a more sensible error when the authentication
fails for Swift.
.PP
So this most likely means your username / password is wrong.
You can investigate further with the \f[C]\-\-dump\-bodies\f[] flag.
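.PP
For example, to see the full HTTP requests and responses while reproducing the failure, you might run:
.IP
.nf
\f[C]
rclone\ \-\-dump\-bodies\ lsd\ remote:
\f[]
.fi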
.SS Rclone gives Failed to create file system: Response didn\[aq]t have
storage storage url and auth token
.PP
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
.SS Dropbox
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
Dropbox paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[].
.PP
The initial setup for dropbox involves getting a token from Dropbox
which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
n)\ New\ remote
d)\ Delete\ remote
q)\ Quit\ config
e/n/d/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 4
Dropbox\ App\ Key\ \-\ leave\ blank\ normally.
app_key>\ 
Dropbox\ App\ Secret\ \-\ leave\ blank\ normally.
app_secret>\ 
Remote\ config
Please\ visit:
https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
Enter\ the\ code:\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
app_key\ =\ 
app_secret\ =\ 
token\ =\ XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
You can then use it like this,
.PP
List directories in top level of your dropbox
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
List all the files in your dropbox
.IP
.nf
\f[C]
rclone\ ls\ remote:
\f[]
.fi
.PP
To copy a local directory to a dropbox directory called backup
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
.SS Modified time and MD5SUMs
.PP
Dropbox doesn\[aq]t provide the ability to set modification times in the
V1 public API, so rclone can\[aq]t support modified time with Dropbox.
.PP
This may change in the future \- see these issues for details:
.IP \[bu] 2
Dropbox V2 API (https://github.com/ncw/rclone/issues/349)
.IP \[bu] 2
Allow syncs for remotes that can\[aq]t set modtime on existing
objects (https://github.com/ncw/rclone/issues/348)
.PP
Dropbox doesn\[aq]t return any sort of checksum (MD5 or SHA1).
.PP
Together that means that syncs to dropbox will effectively have the
\f[C]\-\-size\-only\f[] flag set.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-dropbox\-chunk\-size=SIZE
.PP
Upload chunk size.
Max 150M.
The default is 128MB.
Note that this isn\[aq]t buffered into memory.
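.PP
For example, to upload in 64M chunks (the 64M value is shown only for illustration):
.IP
.nf
\f[C]
rclone\ \-\-dropbox\-chunk\-size\ 64M\ copy\ /home/source\ remote:backup
\f[]
.fi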
.SS Limitations
.PP
Note that Dropbox is case insensitive so you can\[aq]t have a file
called "Hello.doc" and one called "hello.doc".
.PP
There are some file names such as \f[C]thumbs.db\f[] which Dropbox
can\[aq]t store.
There is a full list of them in the "Ignored Files" section of this
document (https://www.dropbox.com/en/help/145).
Rclone will issue an error message
\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to
upload one of those file names, but the sync won\[aq]t fail.
.PP
If you have more than 10,000 files in a directory then
\f[C]rclone\ purge\ dropbox:dir\f[] will return the error
\f[C]Failed\ to\ purge:\ There\ are\ too\ many\ files\ involved\ in\ this\ operation\f[].
As a work\-around do an \f[C]rclone\ delete\ dropbox:dir\f[] followed by
an \f[C]rclone\ rmdir\ dropbox:dir\f[].
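.PP
The work\-around described above looks like this:
.IP
.nf
\f[C]
rclone\ delete\ dropbox:dir
rclone\ rmdir\ dropbox:dir
\f[]
.fi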
.SS Google Cloud Storage
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
\f[C]remote:bucket/path/to/dir\f[].
.PP
The initial setup for google cloud storage involves getting a token from
Google Cloud Storage which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
n)\ New\ remote
d)\ Delete\ remote
q)\ Quit\ config
e/n/d/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 5
Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\ 
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ 
Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console.
project_number>\ 12345678
Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login.
service_account_file>\ 
Access\ Control\ List\ for\ new\ objects.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
\ 1)\ authenticatedRead
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ OWNER\ access.
\ 2)\ bucketOwnerFullControl
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ owners\ get\ READER\ access.
\ 3)\ bucketOwnerRead
\ *\ Object\ owner\ gets\ OWNER\ access\ [default\ if\ left\ blank].
\ 4)\ private
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ project\ team\ members\ get\ access\ according\ to\ their\ roles.
\ 5)\ projectPrivate
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
\ 6)\ publicRead
object_acl>\ 4
Access\ Control\ List\ for\ new\ buckets.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
\ 1)\ authenticatedRead
\ *\ Project\ team\ owners\ get\ OWNER\ access\ [default\ if\ left\ blank].
\ 2)\ private
\ *\ Project\ team\ members\ get\ access\ according\ to\ their\ roles.
\ 3)\ projectPrivate
\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ READER\ access.
\ 4)\ publicRead
\ *\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access.
\ 5)\ publicReadWrite
bucket_acl>\ 2
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine\ or\ Y\ didn\[aq]t\ work
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
type\ =\ google\ cloud\ storage
client_id\ =\ 
client_secret\ =\ 
token\ =\ {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014\-07\-17T20:49:14.929208288+01:00","Extra":null}
project_number\ =\ 12345678
object_acl\ =\ private
bucket_acl\ =\ private
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
This is on \f[C]http://127.0.0.1:53682/\f[] and this may require you
to unblock it temporarily if you are running a host firewall, or use
manual mode.
.PP
This remote is called \f[C]remote\f[] and can now be used like this
.PP
See all the buckets in your project
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
Make a new bucket
.IP
.nf
\f[C]
rclone\ mkdir\ remote:bucket
\f[]
.fi
.PP
List the contents of a bucket
.IP
.nf
\f[C]
rclone\ ls\ remote:bucket
\f[]
.fi
.PP
Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any
excess files in the bucket.
.IP
.nf
\f[C]
rclone\ sync\ /home/local/directory\ remote:bucket
\f[]
.fi
.SS Service Account support
.PP
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e.
not tied to a specific end\-user Google account.
This is useful when you want to synchronise files onto machines that
don\[aq]t have actively logged\-in users, for example build machines.
.PP
To get credentials for Google Cloud Platform IAM Service
Accounts (https://cloud.google.com/iam/docs/service-accounts), please
head to the Service
Account (https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console.
Service Accounts behave just like normal \f[C]User\f[] permissions in
Google Cloud Storage
ACLs (https://cloud.google.com/storage/docs/access-control), so you can
limit their access (e.g.
make them read only).
After creating an account, a JSON file containing the Service
Account\[aq]s credentials will be downloaded onto your machines.
These credentials are what rclone will use for authentication.
.PP
To use a Service Account instead of OAuth2 token flow, enter the path to
your Service Account credentials at the \f[C]service_account_file\f[]
prompt and rclone won\[aq]t use the browser based authentication flow.
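.PP
Once configured this way, rclone can run unattended, for example from a build machine's scheduler. A sketch, assuming a remote named \f[C]remote\f[] (the credentials path is hypothetical):
.IP
.nf
\f[C]
#\ remote\ was\ configured\ with\ service_account_file\ =\ /path/to/credentials.json
rclone\ sync\ /home/builds\ remote:bucket
\f[]
.fi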
.SS Modified time
.PP
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
.SS Amazon Drive
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
Paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[].
.PP
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
n)\ New\ remote
d)\ Delete\ remote
q)\ Quit\ config
e/n/d/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 1
Amazon\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\ 
Amazon\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ 
Remote\ config
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id\ =\ 
client_secret\ =\ 
token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015\-09\-06T16:07:39.658438471+01:00"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
See the remote setup docs (http://rclone.org/remote_setup/) for how to
set it up on a machine with no Internet browser available.
.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
This is on \f[C]http://127.0.0.1:53682/\f[] and this may require you
to unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[] like this,
.PP
List directories in top level of your Amazon Drive
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
List all the files in your Amazon Drive
.IP
.nf
\f[C]
rclone\ ls\ remote:
\f[]
.fi
.PP
To copy a local directory to an Amazon Drive directory called
backup
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
.SS Modified time and MD5SUMs
.PP
Amazon Drive doesn\[aq]t allow modification times to be changed
via the API so these won\[aq]t be accurate or used for syncing.
.PP
It does store MD5SUMs so for a more accurate sync, you can use the
\f[C]\-\-checksum\f[] flag.
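.PP
For example, to sync using MD5SUMs rather than modification times:
.IP
.nf
\f[C]
rclone\ sync\ \-\-checksum\ /home/source\ remote:backup
\f[]
.fi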
.SS Deleting files
.PP
Any files you delete with rclone will end up in the trash.
Amazon don\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Amazon\[aq]s
apps or via the Amazon Drive website.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-acd\-templink\-threshold=SIZE
.PP
Files this size or more will be downloaded via their \f[C]tempLink\f[].
This is to work around a problem with Amazon Drive which blocks
downloads of files bigger than about 10GB.
The default for this is 9GB which shouldn\[aq]t need to be changed.
.PP
To download files above this threshold, rclone requests a
\f[C]tempLink\f[] which downloads the file through a temporary URL
directly from the underlying S3 storage.
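.PP
For example, to fetch files of 1G or more via their \f[C]tempLink\f[] (the 1G threshold is purely illustrative):
.IP
.nf
\f[C]
rclone\ \-\-acd\-templink\-threshold\ 1G\ copy\ remote:backup\ /home/local/directory
\f[]
.fi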
.SS Limitations
.PP
Note that Amazon Drive is case insensitive so you can\[aq]t have a
file called "Hello.doc" and one called "hello.doc".
.PP
Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors).
rclone will automatically retry the sync up to 3 times by default (see
\f[C]\-\-retries\f[] flag) which should hopefully work around this
problem.
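.PP
For example, to raise the retry count for a sync that keeps hitting rate limits (10 is an illustrative value):
.IP
.nf
\f[C]
rclone\ sync\ \-\-retries\ 10\ /home/source\ remote:backup
\f[]
.fi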
.PP
Amazon Drive has an internal limit on the size of files that can be
uploaded to the service.
This limit is not officially published, but all files larger than this
will fail.
.PP
At the time of writing (Jan 2016) it is in the area of 50GB per file.
This means that larger files are likely to fail.
.PP
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as any other
failure.
To avoid this problem, use the \f[C]\-\-max\-size=50GB\f[] option to limit
the maximum size of uploaded files.
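.PP
For example:
.IP
.nf
\f[C]
rclone\ sync\ \-\-max\-size=50GB\ /home/source\ remote:backup
\f[]
.fi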
.SS Microsoft One Drive
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
Paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[].
.PP
The initial setup for One Drive involves getting a token from Microsoft
which you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
n/s>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 9
Microsoft\ App\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\ 
Microsoft\ App\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ 
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id\ =\ 
client_secret\ =\ 
token\ =\ {"access_token":"XXXXXX"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
See the remote setup docs (http://rclone.org/remote_setup/) for how to
set it up on a machine with no Internet browser available.
.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from Microsoft.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
This is on \f[C]http://127.0.0.1:53682/\f[] and this may require you
to unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[] like this,
.PP
List directories in top level of your One Drive
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
List all the files in your One Drive
.IP
.nf
\f[C]
rclone\ ls\ remote:
\f[]
.fi
.PP
To copy a local directory to a One Drive directory called backup
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
.SS Modified time and hashes
.PP
One Drive allows modification times to be set on objects accurate to 1
second.
These will be used to detect whether objects need syncing or not.
.PP
One Drive supports SHA1 type hashes, so you can use the
\f[C]\-\-checksum\f[] flag.
.SS Deleting files
.PP
Any files you delete with rclone will end up in the trash.
Microsoft doesn\[aq]t provide an API to permanently delete files, nor to
empty the trash, so you will have to do that with one of Microsoft\[aq]s
apps or via the One Drive website.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-onedrive\-chunk\-size=SIZE
.PP
Above this size files will be chunked \- must be a multiple of 320k.
The default is 10MB.
Note that the chunks will be buffered into memory.
.SS \-\-onedrive\-upload\-cutoff=SIZE
.PP
Cutoff for switching to chunked upload \- must be <= 100MB.
The default is 10MB.
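.PP
For example, to switch to chunked uploads at 50M and use 20M chunks (both values are illustrative; note 20M is a multiple of 320k):
.IP
.nf
\f[C]
rclone\ \-\-onedrive\-upload\-cutoff\ 50M\ \-\-onedrive\-chunk\-size\ 20M\ copy\ /home/source\ remote:backup
\f[]
.fi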
.SS Limitations
.PP
Note that One Drive is case insensitive so you can\[aq]t have a file
called "Hello.doc" and one called "hello.doc".
.PP
Rclone only supports your default One Drive, and doesn\[aq]t work with
One Drive for business.
Both these issues may be fixed at some point depending on user demand!
.PP
There are quite a few characters that can\[aq]t be in One Drive file
names.
These can\[aq]t occur on Windows platforms, but on non\-Windows
platforms they are common.
Rclone will map these names to and from an identical looking unicode
equivalent.
For example, if a file has a \f[C]?\f[] in it, it will be mapped to
\f[C]？\f[] instead.
.SS Hubic
.PP
Paths are specified as \f[C]remote:path\f[]
.PP
Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[]
for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
\f[C]remote:container/path/to/dir\f[].
.PP
The initial setup for Hubic involves getting a token from Hubic which
you need to do in your browser.
\f[C]rclone\ config\f[] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[].
First run:
.IP
.nf
\f[C]
\ rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
n)\ New\ remote
s)\ Set\ configuration\ password
n/s>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 7
Hubic\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\ 
Hubic\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ 
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id\ =\ 
client_secret\ =\ 
token\ =\ {"access_token":"XXXXXX"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
|
|
|
|
|
.PP
|
|
|
|
|
See the remote setup docs (http://rclone.org/remote_setup/) for how to
|
|
|
|
|
set it up on a machine with no Internet browser available.
|
|
|
|
|
.PP
|
|
|
|
|
Note that rclone runs a webserver on your local machine to collect the
|
|
|
|
|
token as returned from Hubic.
|
|
|
|
|
This only runs from the moment it opens your browser to the moment you
|
|
|
|
|
get back the verification code.
|
|
|
|
|
This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
to unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[] like this,
.PP
List containers in the top level of your Hubic
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
List all the files in your Hubic
.IP
.nf
\f[C]
rclone\ ls\ remote:
\f[]
.fi
.PP
To copy a local directory to a Hubic directory called backup
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
.PP
If you want the directory to be visible in the official \f[I]Hubic
browser\f[], you need to copy your files to the \f[C]default\f[]
directory
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:default/backup
\f[]
.fi
.SS Modified time
.PP
The modified time is stored as metadata on the object as
\f[C]X\-Object\-Meta\-Mtime\f[] as floating point since the epoch
accurate to 1 ns.
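As an illustrative sketch (the file and its timestamp are hypothetical), the metadata value is plain seconds-since-the-epoch with a fractional part, so a file last modified at 2016-01-01T00:00:00Z exactly would carry:

```shell
# Hypothetical example: the value stored in X-Object-Meta-Mtime is
# seconds since the epoch as a floating point number, here for a file
# last modified at 2016-01-01T00:00:00Z (1451606400 s) exactly.
printf 'X-Object-Meta-Mtime: %s\n' '1451606400.000000000'
```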
.PP
This is a de facto standard (used in the official python\-swiftclient
amongst others) for storing the modification time for an object.
.PP
Note that Hubic wraps the Swift backend, so most of the properties of
the Swift backend are the same.
.SS Limitations
.PP
This uses the normal OpenStack Swift mechanism to refresh the Swift API
credentials and ignores the expires field returned by the Hubic API.
.PP
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these.
.SS Backblaze B2
.PP
B2 is Backblaze\[aq]s cloud storage
system (https://www.backblaze.com/b2/).
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
\f[C]remote:bucket/path/to/dir\f[].
.PP
Here is an example of making a b2 configuration.
First run
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process.
You will need your account number (a short hex number) and key (a long
hex number) which you can get from the b2 control panel.
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
q)\ Quit\ config
n/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 3
Account\ ID
account>\ 123456789abc
Application\ Key
key>\ 0123456789abcdef0123456789abcdef0123456789
Endpoint\ for\ the\ service\ \-\ leave\ blank\ normally.
endpoint>\ 
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
account\ =\ 123456789abc
key\ =\ 0123456789abcdef0123456789abcdef0123456789
endpoint\ =\ 
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
This remote is called \f[C]remote\f[] and can now be used like this
.PP
See all buckets
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
Make a new bucket
.IP
.nf
\f[C]
rclone\ mkdir\ remote:bucket
\f[]
.fi
.PP
List the contents of a bucket
.IP
.nf
\f[C]
rclone\ ls\ remote:bucket
\f[]
.fi
.PP
Sync \f[C]/home/local/directory\f[] to the remote bucket, deleting any
excess files in the bucket.
.IP
.nf
\f[C]
rclone\ sync\ /home/local/directory\ remote:bucket
\f[]
.fi
.SS Modified time
.PP
The modified time is stored as metadata on the object as
\f[C]X\-Bz\-Info\-src_last_modified_millis\f[] as milliseconds since
1970\-01\-01 in the Backblaze standard.
Other tools should be able to use this as a modified time.
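As a minimal sketch of the conversion (the timestamp is hypothetical), the stored value is simply the epoch time in whole milliseconds:

```shell
# Illustrative only: convert a modification time in whole seconds since
# the epoch (2016-01-01T00:00:00Z here, for a hypothetical file) to the
# millisecond value stored in X-Bz-Info-src_last_modified_millis.
EPOCH_SECONDS=1451606400
echo $((EPOCH_SECONDS * 1000))
```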
.PP
Modified times are used in syncing and are fully supported except in the
case of updating a modification time on an existing object.
In this case the object will be uploaded again as B2 doesn\[aq]t have an
API method to set the modification time independent of doing an upload.
.SS SHA1 checksums
.PP
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
You can use the \f[C]\-\-checksum\f[] flag.
.PP
Large files which are uploaded in chunks will store their SHA1 on the
object as \f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by
Backblaze.
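To compare against the checksum stored on B2, you can compute a local file's SHA1 yourself; the file name and contents below are illustrative:

```shell
# Compute a local file's SHA1 to compare against the checksum rclone
# stores on B2 (or X-Bz-Info-large_file_sha1 for chunked uploads).
# The file and its contents are illustrative.
printf 'hello\n' > /tmp/rclone-sha1-example.txt
sha1sum /tmp/rclone-sha1-example.txt | cut -d' ' -f1
# -> f572d396fae9206628714fb2ce00f72e94f2258f
```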
.SS Versions
.PP
When rclone uploads a new version of a file it creates a new version of
it (https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
.PP
The old versions of files are visible in the B2 web interface, but not
via rclone yet.
.PP
Rclone doesn\[aq]t provide any way of managing old versions (downloading
them or deleting them) at the moment.
When you \f[C]purge\f[] a bucket, all the old versions will be deleted.
.SS Transfers
.PP
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed.
In tests from my SSD equipped laptop the optimum setting is about
\f[C]\-\-transfers\ 32\f[] though higher numbers may be used for a
slight speed improvement.
The optimum number for you may vary depending on your hardware, how big
the files are, how much you want to load your computer, etc.
The default of \f[C]\-\-transfers\ 4\f[] is definitely too low for
Backblaze B2 though.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-b2\-chunk\-size=SIZE
.PP
When uploading large files chunk the file into this size.
Note that these chunks are buffered in memory.
100,000,000 Bytes is the minimum size (default 96M).
.SS \-\-b2\-upload\-cutoff=SIZE
.PP
Cutoff for switching to chunked upload (default 4.657GiB == 5GB).
Files above this size will be uploaded in chunks of
\f[C]\-\-b2\-chunk\-size\f[].
The default value is the largest file which can be uploaded without
chunks.
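As illustrative arithmetic for how the two options interact (the 6 GB file size is hypothetical), a file above the cutoff is split into ceil(size / chunk size) chunks, each buffered in memory before upload:

```shell
# Illustrative arithmetic: files above --b2-upload-cutoff are uploaded
# in --b2-chunk-size pieces, so a hypothetical 6 GB file with the
# default 96M chunk size needs ceil(size / chunk_size) chunks.
FILE_SIZE=$((6 * 1000 * 1000 * 1000))   # 6 GB, hypothetical
CHUNK_SIZE=$((96 * 1024 * 1024))        # default --b2-chunk-size
echo $(( (FILE_SIZE + CHUNK_SIZE - 1) / CHUNK_SIZE ))
# -> 60
```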
.SS API
.PP
Here are some notes I made on the backblaze
API (https://gist.github.com/ncw/166dabf352b399f1cc1c) while integrating
it with rclone.
.SS Yandex Disk
.PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution
created by Yandex (http://yandex.com).
.PP
Yandex paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[].
.PP
Here is an example of making a yandex configuration.
First run
.IP
.nf
\f[C]
rclone\ config
\f[]
.fi
.PP
This will guide you through an interactive setup process:
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
n/s>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Amazon\ Cloud\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph)
\ \ \ \\\ "s3"
\ 3\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
\ 4\ /\ Dropbox
\ \ \ \\\ "dropbox"
\ 5\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
\ 6\ /\ Google\ Drive
\ \ \ \\\ "drive"
\ 7\ /\ Hubic
\ \ \ \\\ "hubic"
\ 8\ /\ Local\ Disk
\ \ \ \\\ "local"
\ 9\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
10\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
11\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 11
Yandex\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>\ 
Yandex\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ 
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
y)\ Yes
n)\ No
y/n>\ y
If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
Log\ in\ and\ authorize\ rclone\ for\ access
Waiting\ for\ code...
Got\ code
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id\ =\ 
client_secret\ =\ 
token\ =\ {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016\-12\-29T12:27:11.362788025Z"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
d)\ Delete\ this\ remote
y/e/d>\ y
\f[]
.fi
.PP
See the remote setup docs (http://rclone.org/remote_setup/) for how to
set it up on a machine with no Internet browser available.
.PP
Note that rclone runs a webserver on your local machine to collect the
token as returned from Yandex Disk.
This only runs from the moment it opens your browser to the moment you
get back the verification code.
This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
to unblock it temporarily if you are running a host firewall.
.PP
Once configured you can then use \f[C]rclone\f[] like this,
.PP
See top level directories
.IP
.nf
\f[C]
rclone\ lsd\ remote:
\f[]
.fi
.PP
Make a new directory
.IP
.nf
\f[C]
rclone\ mkdir\ remote:directory
\f[]
.fi
.PP
List the contents of a directory
.IP
.nf
\f[C]
rclone\ ls\ remote:directory
\f[]
.fi
.PP
Sync \f[C]/home/local/directory\f[] to the remote path, deleting any
excess files in the path.
.IP
.nf
\f[C]
rclone\ sync\ /home/local/directory\ remote:directory
\f[]
.fi
.SS Modified time
.PP
Modified times are supported and are stored accurate to 1 ns in custom
metadata called \f[C]rclone_modified\f[] in RFC3339 with nanoseconds
format.
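As an illustrative rendering of that format (GNU date assumed; the epoch timestamp is hypothetical):

```shell
# Illustrative (GNU date): render an epoch timestamp in the RFC3339
# with-nanoseconds form that rclone stores in the rclone_modified
# custom metadata.
date -u -d @1451606400 '+%Y-%m-%dT%H:%M:%S.%NZ'
# -> 2016-01-01T00:00:00.000000000Z
```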
.SS MD5 checksums
.PP
MD5 checksums are natively supported by Yandex Disk.
.SS Local Filesystem
.PP
Local paths are specified as normal filesystem paths, eg
\f[C]/path/to/wherever\f[], so
.IP
.nf
\f[C]
rclone\ sync\ /home/source\ /tmp/destination
\f[]
.fi
.PP
Will sync \f[C]/home/source\f[] to \f[C]/tmp/destination\f[]
.PP
These can be configured into the config file for consistency\[aq]s sake,
but it is probably easier not to.
.SS Modified time
.PP
Rclone reads and writes the modified time using an accuracy determined
by the OS.
Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
.SS Filenames
.PP
Filenames are expected to be encoded in UTF\-8 on disk.
This is the normal case for Windows and OS X.
.PP
There is a bit more uncertainty in the Linux world, but new
distributions will have UTF\-8 encoded file names.
If you are using an old Linux filesystem with non UTF\-8 file names (eg
latin1) then you can use the \f[C]convmv\f[] tool to convert the
filesystem to UTF\-8.
This tool is available in most distributions\[aq] package managers.
.PP
If an invalid (non\-UTF8) filename is read, the invalid characters will
be replaced with the unicode replacement character, \[aq]�\[aq].
\f[C]rclone\f[] will emit a debug message in this case (use \f[C]\-v\f[]
to see), eg
.IP
.nf
\f[C]
Local\ file\ system\ at\ .:\ Replacing\ invalid\ UTF\-8\ characters\ in\ "gro\\xdf"
\f[]
.fi
.SS Long paths on Windows
.PP
Rclone handles long paths automatically, by converting all paths to long
UNC
paths (https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath)
which allows paths up to 32,767 characters.
.PP
This is why you will see that your paths, for instance
\f[C]c:\\files\f[] is converted to the UNC path
\f[C]\\\\?\\c:\\files\f[] in the output, and \f[C]\\\\server\\share\f[]
is converted to \f[C]\\\\?\\UNC\\server\\share\f[].
.PP
However, in rare cases this may cause problems with buggy file system
drivers like EncFS (https://github.com/ncw/rclone/issues/261).
To disable UNC conversion globally, add this to your
\f[C]\&.rclone.conf\f[] file:
.IP
.nf
\f[C]
[local]
nounc\ =\ true
\f[]
.fi
.PP
If you want to selectively disable UNC, you can add it to a separate
entry like this:
.IP
.nf
\f[C]
[nounc]
type\ =\ local
nounc\ =\ true
\f[]
.fi
.PP
And use rclone like this:
.PP
\f[C]rclone\ copy\ c:\\src\ nounc:z:\\dst\f[]
.PP
This will use UNC paths on \f[C]c:\\src\f[] but not on \f[C]z:\\dst\f[].
Of course this will cause problems if the absolute path length of a file
exceeds 258 characters on z, so only use this option if you have to.
.SS Changelog
.IP \[bu] 2
v1.30 \- 2016\-06\-18
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Directory listing code reworked for more features and better error
reporting (thanks to Klaus Post for help).
This enables
.RS 2
.IP \[bu] 2
Directory include filtering for efficiency
.IP \[bu] 2
\-\-max\-depth parameter
.IP \[bu] 2
Better error reporting
.IP \[bu] 2
More to come
.RE
.IP \[bu] 2
Retry more errors
.IP \[bu] 2
Add \-\-ignore\-size flag \- for uploading images to onedrive
.IP \[bu] 2
Log \-v output to stdout by default
.IP \[bu] 2
Display the transfer stats in more human readable form
.IP \[bu] 2
Make 0 size files specifiable with \f[C]\-\-max\-size\ 0b\f[]
.IP \[bu] 2
Add \f[C]b\f[] suffix so we can specify bytes in \-\-bwlimit,
\-\-min\-size etc
.IP \[bu] 2
Use "password:" instead of "password>" prompt \- thanks Klaus Post and
Leigh Klotz
.IP \[bu] 2
Bug Fixes
.IP \[bu] 2
Fix retry doing one too many retries
.IP \[bu] 2
Local
.IP \[bu] 2
Fix problems with OS X and UTF\-8 characters
.IP \[bu] 2
Amazon Drive
.IP \[bu] 2
Check a file exists before uploading to help with 408 Conflict errors
.IP \[bu] 2
Reauth on 401 errors \- this has been causing a lot of problems
.IP \[bu] 2
Work around spurious 403 errors
.IP \[bu] 2
Restart directory listings on error
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Check a file exists before uploading to help with duplicates
.IP \[bu] 2
Fix retry of multipart uploads
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Implement large file uploading
.IP \[bu] 2
S3
.IP \[bu] 2
Add AES256 server\-side encryption \- thanks Justin R.
Wilson
.IP \[bu] 2
Google Cloud Storage
.IP \[bu] 2
Make sure we don\[aq]t use conflicting content types on upload
.IP \[bu] 2
Add service account support \- thanks Michal Witkowski
.IP \[bu] 2
Swift
.IP \[bu] 2
Add auth version parameter
.IP \[bu] 2
Add domain option for openstack (v3 auth) \- thanks Fabian Ruff
.RE
.IP \[bu] 2
v1.29 \- 2016\-04\-18
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Implement \f[C]\-I,\ \-\-ignore\-times\f[] for unconditional upload
.IP \[bu] 2
Improve \f[C]dedupe\f[] command
.RS 2
.IP \[bu] 2
Now removes identical copies without asking
.IP \[bu] 2
Now obeys \f[C]\-\-dry\-run\f[]
.IP \[bu] 2
Implement \f[C]\-\-dedupe\-mode\f[] for non interactive running
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ interactive\f[] \- interactive the default.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ skip\f[] \- removes identical files then skips
anything left.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ first\f[] \- removes identical files then keeps
the first one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ newest\f[] \- removes identical files then keeps
the newest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ oldest\f[] \- removes identical files then keeps
the oldest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ rename\f[] \- removes identical files then
renames the rest to be different.
.RE
.IP \[bu] 2
Bug fixes
.IP \[bu] 2
Make rclone check obey the \f[C]\-\-size\-only\f[] flag.
.IP \[bu] 2
Use "application/octet\-stream" if discovered mime type is invalid.
.IP \[bu] 2
Fix missing "quit" option when there are no remotes.
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Increase default chunk size to 8 MB \- increases upload speed of big
files
.IP \[bu] 2
Speed up directory listings and make more reliable
.IP \[bu] 2
Add missing retries for Move and DirMove \- increases reliability
.IP \[bu] 2
Preserve mime type on file update
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Enable mod time syncing
.RS 2
.IP \[bu] 2
This means that B2 will now check modification times
.IP \[bu] 2
It will upload new files to update the modification times
.IP \[bu] 2
(there isn\[aq]t an API to just set the mod time.)
.IP \[bu] 2
If you want the old behaviour use \f[C]\-\-size\-only\f[].
.RE
.IP \[bu] 2
Update API to new version
.IP \[bu] 2
Fix parsing of mod time when not in metadata
.IP \[bu] 2
Swift/Hubic
.IP \[bu] 2
Don\[aq]t return an MD5SUM for static large objects
.IP \[bu] 2
S3
.IP \[bu] 2
Fix uploading files bigger than 50GB
.RE
.IP \[bu] 2
v1.28 \- 2016\-03\-01
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Configuration file encryption \- thanks Klaus Post
.IP \[bu] 2
Improve \f[C]rclone\ config\f[] adding more help and making it easier to
understand
.IP \[bu] 2
Implement \f[C]\-u\f[]/\f[C]\-\-update\f[] so creation times can be used
on all remotes
.IP \[bu] 2
Implement \f[C]\-\-low\-level\-retries\f[] flag
.IP \[bu] 2
Optionally disable gzip compression on downloads with
\f[C]\-\-no\-gzip\-encoding\f[]
.IP \[bu] 2
Bug fixes
.IP \[bu] 2
Don\[aq]t make directories if \f[C]\-\-dry\-run\f[] set
.IP \[bu] 2
Fix and document the \f[C]move\f[] command
.IP \[bu] 2
Fix redirecting stderr on unix\-like OSes when using
\f[C]\-\-log\-file\f[]
.IP \[bu] 2
Fix \f[C]delete\f[] command to wait until all finished \- fixes missing
deletes.
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Use one upload URL per go routine fixes
\f[C]more\ than\ one\ upload\ using\ auth\ token\f[]
.IP \[bu] 2
Add pacing, retries and reauthentication \- fixes token expiry problems
.IP \[bu] 2
Upload without using a temporary file from local (and remotes which
support SHA1)
.IP \[bu] 2
Fix reading metadata for all files when it shouldn\[aq]t have been
.IP \[bu] 2
Drive
.IP \[bu] 2
Fix listing drive documents at root
.IP \[bu] 2
Disable copy and move for Google docs
.IP \[bu] 2
Swift
.IP \[bu] 2
Fix uploading of chunked files with non ASCII characters
.IP \[bu] 2
Allow setting of \f[C]storage_url\f[] in the config \- thanks Xavier
Lucas
.IP \[bu] 2
S3
.IP \[bu] 2
Allow IAM role and credentials from environment variables \- thanks
Brian Stengaard
.IP \[bu] 2
Allow low privilege users to use S3 (check if directory exists during
Mkdir) \- thanks Jakub Gedeon
.IP \[bu] 2
Amazon Drive
.IP \[bu] 2
Retry on more things to make directory listings more reliable
.RE
.IP \[bu] 2
v1.27 \- 2016\-01\-31
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Easier headless configuration with \f[C]rclone\ authorize\f[]
.IP \[bu] 2
Add support for multiple hash types \- we now check SHA1 as well as MD5
hashes.
.IP \[bu] 2
\f[C]delete\f[] command which does obey the filters (unlike
\f[C]purge\f[])
.IP \[bu] 2
\f[C]dedupe\f[] command to deduplicate a remote.
Useful with Google Drive.
.IP \[bu] 2
Add \f[C]\-\-ignore\-existing\f[] flag to skip all files that exist on
destination.
.IP \[bu] 2
Add \f[C]\-\-delete\-before\f[], \f[C]\-\-delete\-during\f[],
\f[C]\-\-delete\-after\f[] flags.
.IP \[bu] 2
Add \f[C]\-\-memprofile\f[] flag to debug memory use.
.IP \[bu] 2
Warn the user about files with same name but different case
.IP \[bu] 2
Make \f[C]\-\-include\f[] rules add their implicit exclude * at the end
of the filter list
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Deprecate compiling with go1.3
|
|
|
|
|
.IP \[bu] 2
|
2016-07-11 13:42:44 +02:00
|
|
|
|
Amazon Drive
|
2016-02-07 21:26:10 +01:00
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix download of files > 10 GB
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix directory traversal ("Next token is expired") for large directory
|
|
|
|
|
listings
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Remove 409 conflict from error codes we will retry \- stops very long
|
|
|
|
|
pauses
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Backblaze B2
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
SHA1 hashes now checked by rclone core
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Drive
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Add \f[C]\-\-drive\-auth\-owner\-only\f[] to only consider files owned
|
|
|
|
|
by the user \- thanks Björn Harrtell
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Export Google documents
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Dropbox
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Make file exclusion error controllable with \-q
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Swift
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix upload from unprivileged user.
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
S3
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix updating of mod times of files with \f[C]+\f[] in.
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Local
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Add local file system option to disable UNC on Windows.
|
|
|
|
|
.RE
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
v1.26 \- 2016\-01\-02
|
|
|
|
|
.RS 2
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
New Features
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Yandex storage backend \- thank you Dmitry Burdeev ("dibu")
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Implement Backblaze B2 storage backend
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Add \-\-min\-age and \-\-max\-age flags \- thank you Adriano Aurélio
|
|
|
|
|
Meirelles
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Make ls/lsl/md5sum/size/check obey includes and excludes
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fixes
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix crash in http logging
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Upload releases to github too
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Swift
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix sync for chunked files
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
One Drive
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Re\-enable server side copy
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Don\[aq]t mask HTTP error codes with JSON decode error
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
S3
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix corrupting Content\-Type on mod time update (thanks Joseph Spurrier)
|
|
|
|
|
.RE
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
v1.25 \- 2015\-11\-14
|
|
|
|
|
.RS 2
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
New features
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Implement Hubic storage system
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fixes
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Fix deletion of some excluded files without \-\-delete\-excluded
|
|
|
|
|
.RS 2
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
This could have deleted files unexpectedly on sync
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Always check first with \f[C]\-\-dry\-run\f[]!
|
|
|
|
|
.RE
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Swift
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Stop SetModTime losing metadata (eg X\-Object\-Manifest)
|
|
|
|
|
.RS 2
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
This could have caused data loss for files > 5GB in size
|
|
|
|
|
.RE
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
Use ContentType from Object to avoid lookups in listings
|
|
|
|
|
.IP \[bu] 2
|
|
|
|
|
One Drive
|
|
|
|
|
.IP \[bu] 2
|
disable server side copy as it seems to be broken at Microsoft
.RE
.IP \[bu] 2
v1.24 \- 2015\-11\-07
.RS 2
.IP \[bu] 2
New features
.IP \[bu] 2
Add support for Microsoft One Drive
.IP \[bu] 2
Add \f[C]\-\-no\-check\-certificate\f[] option to disable server
certificate verification
.IP \[bu] 2
Add async readahead buffer for faster transfer of big files
.IP \[bu] 2
Fixes
.IP \[bu] 2
Allow spaces in remotes and check remote names for validity at creation
time
.IP \[bu] 2
Allow \[aq]&\[aq] and disallow \[aq]:\[aq] in Windows filenames.
.IP \[bu] 2
Swift
.IP \[bu] 2
Ignore directory marker objects where appropriate \- allows working with
Hubic
.IP \[bu] 2
Don\[aq]t delete the container if fs wasn\[aq]t at root
.IP \[bu] 2
S3
.IP \[bu] 2
Don\[aq]t delete the bucket if fs wasn\[aq]t at root
.IP \[bu] 2
Google Cloud Storage
.IP \[bu] 2
Don\[aq]t delete the bucket if fs wasn\[aq]t at root
.RE
.IP \[bu] 2
v1.23 \- 2015\-10\-03
.RS 2
.IP \[bu] 2
New features
.IP \[bu] 2
Implement \f[C]rclone\ size\f[] for measuring remotes
.IP \[bu] 2
Fixes
.IP \[bu] 2
Fix headless config for drive and gcs
.IP \[bu] 2
Tell the user they should try again if the webserver method failed
.IP \[bu] 2
Improve output of \f[C]\-\-dump\-headers\f[]
.IP \[bu] 2
S3
.IP \[bu] 2
Allow anonymous access to public buckets
.IP \[bu] 2
Swift
.IP \[bu] 2
Stop chunked operations logging "Failed to read info: Object Not Found"
.IP \[bu] 2
Use Content\-Length on uploads for extra reliability
.RE
.IP \[bu] 2
v1.22 \- 2015\-09\-28
.RS 2
.IP \[bu] 2
Implement rsync like include and exclude flags
.IP \[bu] 2
swift
.IP \[bu] 2
Support files > 5GB \- thanks Sergey Tolmachev
.RE
.IP \[bu] 2
v1.21 \- 2015\-09\-22
.RS 2
.IP \[bu] 2
New features
.IP \[bu] 2
Display individual transfer progress
.IP \[bu] 2
Make lsl output times in localtime
.IP \[bu] 2
Fixes
.IP \[bu] 2
Fix allowing user to override credentials again in Drive, GCS and ACD
.IP \[bu] 2
Amazon Drive
.IP \[bu] 2
Implement compliant pacing scheme
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Make directory reads concurrent for increased speed.
.RE
.IP \[bu] 2
v1.20 \- 2015\-09\-15
.RS 2
.IP \[bu] 2
New features
.IP \[bu] 2
Amazon Drive support
.IP \[bu] 2
Oauth support redone \- fix many bugs and improve usability
.RS 2
.IP \[bu] 2
Use "golang.org/x/oauth2" as oauth library of choice
.IP \[bu] 2
Improve oauth usability for smoother initial signup
.IP \[bu] 2
drive, googlecloudstorage: optionally use auto config for the oauth
token
.RE
.IP \[bu] 2
Implement \-\-dump\-headers and \-\-dump\-bodies debug flags
.IP \[bu] 2
Show multiple matched commands if abbreviation too short
.IP \[bu] 2
Implement server side move where possible
.IP \[bu] 2
local
.IP \[bu] 2
Always use UNC paths internally on Windows \- fixes a lot of bugs
.IP \[bu] 2
dropbox
.IP \[bu] 2
force use of our custom transport which makes timeouts work
.IP \[bu] 2
Thanks to Klaus Post for lots of help with this release
.RE
.IP \[bu] 2
v1.19 \- 2015\-08\-28
.RS 2
.IP \[bu] 2
New features
.IP \[bu] 2
Server side copies for s3/swift/drive/dropbox/gcs
.IP \[bu] 2
Move command \- uses server side copies if it can
.IP \[bu] 2
Implement \-\-retries flag \- tries 3 times by default
.IP \[bu] 2
Build for plan9/amd64 and solaris/amd64 too
.IP \[bu] 2
Fixes
.IP \[bu] 2
Make a current version download with a fixed URL for scripting
.IP \[bu] 2
Ignore rmdir in limited fs rather than throwing error
.IP \[bu] 2
dropbox
.IP \[bu] 2
Increase chunk size to improve upload speeds massively
.IP \[bu] 2
Issue an error message when trying to upload bad file name
.RE
.IP \[bu] 2
v1.18 \- 2015\-08\-17
.RS 2
.IP \[bu] 2
drive
.IP \[bu] 2
Add \f[C]\-\-drive\-use\-trash\f[] flag so rclone trashes instead of
deletes
.IP \[bu] 2
Add "Forbidden to download" message for files with no downloadURL
.IP \[bu] 2
dropbox
.IP \[bu] 2
Remove datastore
.RS 2
.IP \[bu] 2
This was deprecated and it caused a lot of problems
.IP \[bu] 2
Modification times and MD5SUMs no longer stored
.RE
.IP \[bu] 2
Fix uploading files > 2GB
.IP \[bu] 2
s3
.IP \[bu] 2
use official AWS SDK from github.com/aws/aws\-sdk\-go
.IP \[bu] 2
\f[B]NB\f[] will most likely require you to delete and recreate remote
.IP \[bu] 2
enable multipart upload which enables files > 5GB
.IP \[bu] 2
tested with Ceph / RadosGW / S3 emulation
.IP \[bu] 2
many thanks to Sam Liston and Brian Haymore at the Utah Center for High
Performance Computing (https://www.chpc.utah.edu/) for a Ceph test
account
.IP \[bu] 2
misc
.IP \[bu] 2
Show errors when reading the config file
.IP \[bu] 2
Do not print stats in quiet mode \- thanks Leonid Shalupov
.IP \[bu] 2
Add FAQ
.IP \[bu] 2
Fix created directories not obeying umask
.IP \[bu] 2
Linux installation instructions \- thanks Shimon Doodkin
.RE
.IP \[bu] 2
v1.17 \- 2015\-06\-14
.RS 2
.IP \[bu] 2
dropbox: fix case insensitivity issues \- thanks Leonid Shalupov
.RE
.IP \[bu] 2
v1.16 \- 2015\-06\-09
.RS 2
.IP \[bu] 2
Fix uploading big files which was causing timeouts or panics
.IP \[bu] 2
Don\[aq]t check md5sum after download with \-\-size\-only
.RE
.IP \[bu] 2
v1.15 \- 2015\-06\-06
.RS 2
.IP \[bu] 2
Add \-\-checksum flag to only discard transfers by MD5SUM \- thanks Alex
Couper
.IP \[bu] 2
Implement \-\-size\-only flag to sync on size not checksum & modtime
.IP \[bu] 2
Expand docs and remove duplicated information
.IP \[bu] 2
Document rclone\[aq]s limitations with directories
.IP \[bu] 2
dropbox: update docs about case insensitivity
.RE
.IP \[bu] 2
v1.14 \- 2015\-05\-21
.RS 2
.IP \[bu] 2
local: fix encoding of non utf\-8 file names \- fixes a duplicate file
problem
.IP \[bu] 2
drive: docs about rate limiting
.IP \[bu] 2
google cloud storage: Fix compile after API change in
"google.golang.org/api/storage/v1"
.RE
.IP \[bu] 2
v1.13 \- 2015\-05\-10
.RS 2
.IP \[bu] 2
Revise documentation (especially sync)
.IP \[bu] 2
Implement \-\-timeout and \-\-conntimeout
.IP \[bu] 2
s3: ignore etags from multipart uploads which aren\[aq]t md5sums
.RE
.IP \[bu] 2
v1.12 \- 2015\-03\-15
.RS 2
.IP \[bu] 2
drive: Use chunked upload for files above a certain size
.IP \[bu] 2
drive: add \-\-drive\-chunk\-size and \-\-drive\-upload\-cutoff
parameters
.IP \[bu] 2
drive: switch to insert from update when a failed copy deletes the
upload
.IP \[bu] 2
core: Log duplicate files if they are detected
.RE
.IP \[bu] 2
v1.11 \- 2015\-03\-04
.RS 2
.IP \[bu] 2
swift: add region parameter
.IP \[bu] 2
drive: fix crash on failed to update remote mtime
.IP \[bu] 2
In remote paths, change native directory separators to /
.IP \[bu] 2
Add synchronization to ls/lsl/lsd output to stop corruptions
.IP \[bu] 2
Ensure all stats/log messages go to stderr
.IP \[bu] 2
Add \-\-log\-file flag to log everything (including panics) to file
.IP \[bu] 2
Make it possible to disable stats printing with \-\-stats=0
.IP \[bu] 2
Implement \-\-bwlimit to limit data transfer bandwidth
.RE
.IP \[bu] 2
v1.10 \- 2015\-02\-12
.RS 2
.IP \[bu] 2
s3: list an unlimited number of items
.IP \[bu] 2
Fix getting stuck in the configurator
.RE
.IP \[bu] 2
v1.09 \- 2015\-02\-07
.RS 2
.IP \[bu] 2
windows: Stop drive letters (eg C:) getting mixed up with remotes (eg
drive:)
.IP \[bu] 2
local: Fix directory separators on Windows
.IP \[bu] 2
drive: fix rate limit exceeded errors
.RE
.IP \[bu] 2
v1.08 \- 2015\-02\-04
.RS 2
.IP \[bu] 2
drive: fix subdirectory listing to not list entire drive
.IP \[bu] 2
drive: Fix SetModTime
.IP \[bu] 2
dropbox: adapt code to recent library changes
.RE
.IP \[bu] 2
v1.07 \- 2014\-12\-23
.RS 2
.IP \[bu] 2
google cloud storage: fix memory leak
.RE
.IP \[bu] 2
v1.06 \- 2014\-12\-12
.RS 2
.IP \[bu] 2
Fix "Couldn\[aq]t find home directory" on OSX
.IP \[bu] 2
swift: Add tenant parameter
.IP \[bu] 2
Use new location of Google API packages
.RE
.IP \[bu] 2
v1.05 \- 2014\-08\-09
.RS 2
.IP \[bu] 2
Improved tests and consequently lots of minor fixes
.IP \[bu] 2
core: Fix race detected by go race detector
.IP \[bu] 2
core: Fixes after running errcheck
.IP \[bu] 2
drive: reset root directory on Rmdir and Purge
.IP \[bu] 2
fs: Document that Purger returns error on empty directory, test and fix
.IP \[bu] 2
google cloud storage: fix ListDir on subdirectory
.IP \[bu] 2
google cloud storage: re\-read metadata in SetModTime
.IP \[bu] 2
s3: make reading metadata more reliable to work around eventual
consistency problems
.IP \[bu] 2
s3: strip trailing / from ListDir()
.IP \[bu] 2
swift: return directories without / in ListDir
.RE
.IP \[bu] 2
v1.04 \- 2014\-07\-21
.RS 2
.IP \[bu] 2
google cloud storage: Fix crash on Update
.RE
.IP \[bu] 2
v1.03 \- 2014\-07\-20
.RS 2
.IP \[bu] 2
swift, s3, dropbox: fix updated files being marked as corrupted
.IP \[bu] 2
Make compile with go 1.1 again
.RE
.IP \[bu] 2
v1.02 \- 2014\-07\-19
.RS 2
.IP \[bu] 2
Implement Dropbox remote
.IP \[bu] 2
Implement Google Cloud Storage remote
.IP \[bu] 2
Verify Md5sums and Sizes after copies
.IP \[bu] 2
Remove times from "ls" command \- lists sizes only
.IP \[bu] 2
Add "lsl" \- lists times and sizes
.IP \[bu] 2
Add "md5sum" command
.RE
.IP \[bu] 2
v1.01 \- 2014\-07\-04
.RS 2
.IP \[bu] 2
drive: fix transfer of big files using up lots of memory
.RE
.IP \[bu] 2
v1.00 \- 2014\-07\-03
.RS 2
.IP \[bu] 2
drive: fix whole second dates
.RE
.IP \[bu] 2
v0.99 \- 2014\-06\-26
.RS 2
.IP \[bu] 2
Fix \-\-dry\-run not working
.IP \[bu] 2
Make compatible with go 1.1
.RE
.IP \[bu] 2
v0.98 \- 2014\-05\-30
.RS 2
.IP \[bu] 2
s3: Treat missing Content\-Length as 0 for some ceph installations
.IP \[bu] 2
rclonetest: add file with a space in
.RE
.IP \[bu] 2
v0.97 \- 2014\-05\-05
.RS 2
.IP \[bu] 2
Implement copying of single files
.IP \[bu] 2
s3 & swift: support paths inside containers/buckets
.RE
.IP \[bu] 2
v0.96 \- 2014\-04\-24
.RS 2
.IP \[bu] 2
drive: Fix multiple files of same name being created
.IP \[bu] 2
drive: Use o.Update and fs.Put to optimise transfers
.IP \[bu] 2
Add version number, \-V and \-\-version
.RE
.IP \[bu] 2
v0.95 \- 2014\-03\-28
.RS 2
.IP \[bu] 2
rclone.org: website, docs and graphics
.IP \[bu] 2
drive: fix path parsing
.RE
.IP \[bu] 2
v0.94 \- 2014\-03\-27
.RS 2
.IP \[bu] 2
Change remote format one last time
.IP \[bu] 2
GNU style flags
.RE
.IP \[bu] 2
v0.93 \- 2014\-03\-16
.RS 2
.IP \[bu] 2
drive: store token in config file
.IP \[bu] 2
cross compile other versions
.IP \[bu] 2
set strict permissions on config file
.RE
.IP \[bu] 2
v0.92 \- 2014\-03\-15
.RS 2
.IP \[bu] 2
Config fixes and \-\-config option
.RE
.IP \[bu] 2
v0.91 \- 2014\-03\-15
.RS 2
.IP \[bu] 2
Make config file
.RE
.IP \[bu] 2
v0.90 \- 2013\-06\-27
.RS 2
.IP \[bu] 2
Project named rclone
.RE
.IP \[bu] 2
v0.00 \- 2012\-11\-18
.RS 2
.IP \[bu] 2
Project started
.RE
.SS Bugs and Limitations
.SS Empty directories are left behind / not created
.PP
With remotes that have a concept of directory, eg Local and Drive, empty
directories may be left behind, or not created when one was expected.
.PP
This is because rclone doesn\[aq]t have a concept of a directory \- it
only works on objects.
Most of the object storage systems can\[aq]t actually store a directory
so there is nowhere for rclone to store anything about directories.
.PP
You can work round this to some extent with the \f[C]purge\f[] command
which will delete everything under the path, \f[B]including\f[] empty
directories.
.PP
This may be fixed at some point in Issue
#100 (https://github.com/ncw/rclone/issues/100)
.SS Directory timestamps aren\[aq]t preserved
.PP
For the same reason as the above, rclone doesn\[aq]t have a concept of a
directory \- it only works on objects, therefore it can\[aq]t preserve
the timestamps of directories.
.SS Frequently Asked Questions
.SS Do all cloud storage systems support all rclone commands
.PP
Yes they do.
All the rclone commands (eg \f[C]sync\f[], \f[C]copy\f[] etc) will work
on all the remote storage systems.
.SS Can I copy the config from one machine to another
.PP
Sure! Rclone stores all of its config in a single file.
If you want to find this file, the simplest way is to run
\f[C]rclone\ \-h\f[] and look at the help for the \f[C]\-\-config\f[]
flag, which will tell you where it is.
.PP
See the remote setup docs (http://rclone.org/remote_setup/) for more
info.
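.PP
For example, on a Unix\-like system the two steps can be sketched like
this (the config path and host name below are assumptions \- check the
\f[C]\-\-config\f[] flag output on your own machine):

```shell
# Locate the config file via the --config flag in rclone's help output
# (the path printed is system-dependent; "|| true" keeps this from
# failing on machines without rclone installed).
config_line=$(rclone -h 2>&1 | grep -- '--config' || true)
# Copying that single file migrates every configured remote, eg:
#   scp ~/.rclone.conf user@other-machine:~/
echo "${config_line:-rclone not installed here; see rclone -h}"
```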
.SS How do I configure rclone on a remote / headless box with no
browser?
.PP
This has now been documented in its own remote setup
page (http://rclone.org/remote_setup/).
.SS Can rclone sync directly from drive to s3
.PP
Rclone can sync between two remote cloud storage systems just fine.
.PP
Note that it effectively downloads the file and uploads it again, so the
node running rclone would need to have lots of bandwidth.
.PP
The syncs would be incremental (on a file by file basis).
.PP
Eg
.IP
.nf
\f[C]
rclone\ sync\ drive:Folder\ s3:bucket
\f[]
.fi
.SS Using rclone from multiple locations at the same time
.PP
You can use rclone from multiple places at the same time if you choose a
different subdirectory for the output, eg
.IP
.nf
\f[C]
Server\ A>\ rclone\ sync\ /tmp/whatever\ remote:ServerA
Server\ B>\ rclone\ sync\ /tmp/whatever\ remote:ServerB
\f[]
.fi
.PP
If you sync to the same directory then you should use rclone copy
otherwise the two rclones may delete each other\[aq]s files, eg
.IP
.nf
\f[C]
Server\ A>\ rclone\ copy\ /tmp/whatever\ remote:Backup
Server\ B>\ rclone\ copy\ /tmp/whatever\ remote:Backup
\f[]
.fi
.PP
The file names you upload from Server A and Server B should be different
in this case, otherwise some file systems (eg Drive) may make
duplicates.
.SS Why doesn\[aq]t rclone support partial transfers / binary diffs like
rsync?
.PP
Rclone stores each file you transfer as a native object on the remote
cloud storage system.
This means that you can see the files you upload as expected using
alternative access methods (eg using the Google Drive web interface).
There is a 1:1 mapping between files on your hard disk and objects
created in the cloud storage system.
.PP
Cloud storage systems (at least none I\[aq]ve come across yet) don\[aq]t
support partially uploading an object.
You can\[aq]t take an existing object and change some bytes in the
middle of it.
.PP
It would be possible to make a sync system which stored binary diffs
instead of whole objects like rclone does, but that would break the 1:1
mapping of files on your hard disk to objects in the remote cloud
storage system.
.PP
All the cloud storage systems support partial downloads of content, so
it would be possible to make partial downloads work.
However, making this work efficiently would require storing a
significant amount of metadata, which breaks the desired 1:1 mapping of
files to objects.
.SS Can rclone do bi\-directional sync?
.PP
No, not at present.
rclone only does uni\-directional sync from A \-> B.
It may do in the future though since it has all the primitives \- it
just requires writing the algorithm to do it.
.SS Can I use rclone with an HTTP proxy?
.PP
Yes.
rclone will use the environment variables \f[C]HTTP_PROXY\f[],
\f[C]HTTPS_PROXY\f[] and \f[C]NO_PROXY\f[], similar to cURL and other
programs.
.PP
\f[C]HTTPS_PROXY\f[] takes precedence over \f[C]HTTP_PROXY\f[] for https
requests.
.PP
The environment values may be either a complete URL or a "host[:port]",
in which case the "http" scheme is assumed.
.PP
The \f[C]NO_PROXY\f[] variable allows you to disable the proxy for
specific hosts.
Hosts must be comma separated, and can contain domains or parts.
For instance "foo.com" also matches "bar.foo.com".
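.PP
As a sketch, the variables can be exported in the shell before running
rclone (the proxy URL and host names below are invented examples, not
defaults):

```shell
# Send HTTP and HTTPS traffic via a proxy (example URL):
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
# Skip the proxy for local and in-house hosts; "internal.example.com"
# also matches subdomains such as "api.internal.example.com".
export NO_PROXY="localhost,internal.example.com"
# rclone (like cURL) reads these from the environment, so any command
# run from this shell now goes via the proxy, eg:
#   rclone ls remote:bucket
```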
.SS Rclone gives x509: failed to load system roots and no roots provided
error
.PP
This means that \f[C]rclone\f[] can\[aq]t find the SSL root
certificates.
Likely you are running \f[C]rclone\f[] on a NAS with a cut\-down Linux
OS, or possibly on Solaris.
.PP
Rclone (via the Go runtime) tries to load the root certificates from
these places on Linux.
.IP
.nf
\f[C]
"/etc/ssl/certs/ca\-certificates.crt",\ //\ Debian/Ubuntu/Gentoo\ etc.
"/etc/pki/tls/certs/ca\-bundle.crt",\ \ \ //\ Fedora/RHEL
"/etc/ssl/ca\-bundle.pem",\ \ \ \ \ \ \ \ \ \ \ \ \ //\ OpenSUSE
"/etc/pki/tls/cacert.pem",\ \ \ \ \ \ \ \ \ \ \ \ //\ OpenELEC
\f[]
.fi
.PP
So doing something like this should fix the problem.
It also sets the time, which is important for SSL to work properly.
.IP
.nf
\f[C]
mkdir\ \-p\ /etc/ssl/certs/
curl\ \-o\ /etc/ssl/certs/ca\-certificates.crt\ https://raw.githubusercontent.com/bagder/ca\-bundle/master/ca\-bundle.crt
ntpclient\ \-s\ \-h\ pool.ntp.org
\f[]
.fi
.PP
Note that you may need to add the \f[C]\-\-insecure\f[] option to the
\f[C]curl\f[] command line if it doesn\[aq]t work without it.
.IP
.nf
\f[C]
curl\ \-\-insecure\ \-o\ /etc/ssl/certs/ca\-certificates.crt\ https://raw.githubusercontent.com/bagder/ca\-bundle/master/ca\-bundle.crt
\f[]
.fi
.SS Rclone gives Failed to load config file: function not implemented
error
.PP
Likely this means that you are running rclone on a Linux kernel version
not supported by the Go runtime, ie earlier than version 2.6.23.
.PP
See the system requirements section in the go install
docs (https://golang.org/doc/install) for full details.
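.PP
A quick way to check whether this applies, assuming you have a shell on
the box (2.6.23 is the minimum kernel version the Go runtime supports):

```shell
# Print the running kernel release and compare the leading numbers
# against 2.6.23 by hand.
uname -r
```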
.SS All my uploaded docx/xlsx/pptx files appear as archive/zip
.PP
This is caused by uploading these files from a Windows computer which
hasn\[aq]t got the Microsoft Office suite installed.
The easiest way to fix this is to install the Word viewer and the
Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007
and later versions\[aq] file formats.
.SS License
.PP
This is free software under the terms of the MIT license (check the
COPYING file included with the source code).
.IP
.nf
\f[C]
Copyright\ (C)\ 2012\ by\ Nick\ Craig\-Wood\ http://www.craig\-wood.com/nick/

Permission\ is\ hereby\ granted,\ free\ of\ charge,\ to\ any\ person\ obtaining\ a\ copy
of\ this\ software\ and\ associated\ documentation\ files\ (the\ "Software"),\ to\ deal
in\ the\ Software\ without\ restriction,\ including\ without\ limitation\ the\ rights
to\ use,\ copy,\ modify,\ merge,\ publish,\ distribute,\ sublicense,\ and/or\ sell
copies\ of\ the\ Software,\ and\ to\ permit\ persons\ to\ whom\ the\ Software\ is
furnished\ to\ do\ so,\ subject\ to\ the\ following\ conditions:

The\ above\ copyright\ notice\ and\ this\ permission\ notice\ shall\ be\ included\ in
all\ copies\ or\ substantial\ portions\ of\ the\ Software.

THE\ SOFTWARE\ IS\ PROVIDED\ "AS\ IS",\ WITHOUT\ WARRANTY\ OF\ ANY\ KIND,\ EXPRESS\ OR
IMPLIED,\ INCLUDING\ BUT\ NOT\ LIMITED\ TO\ THE\ WARRANTIES\ OF\ MERCHANTABILITY,
FITNESS\ FOR\ A\ PARTICULAR\ PURPOSE\ AND\ NONINFRINGEMENT.\ IN\ NO\ EVENT\ SHALL\ THE
AUTHORS\ OR\ COPYRIGHT\ HOLDERS\ BE\ LIABLE\ FOR\ ANY\ CLAIM,\ DAMAGES\ OR\ OTHER
LIABILITY,\ WHETHER\ IN\ AN\ ACTION\ OF\ CONTRACT,\ TORT\ OR\ OTHERWISE,\ ARISING\ FROM,
OUT\ OF\ OR\ IN\ CONNECTION\ WITH\ THE\ SOFTWARE\ OR\ THE\ USE\ OR\ OTHER\ DEALINGS\ IN
THE\ SOFTWARE.
\f[]
.fi
.SS Authors
.IP \[bu] 2
Nick Craig\-Wood <nick@craig-wood.com>
.SS Contributors
.IP \[bu] 2
Alex Couper <amcouper@gmail.com>
.IP \[bu] 2
Leonid Shalupov <leonid@shalupov.com>
.IP \[bu] 2
Shimon Doodkin <helpmepro1@gmail.com>
.IP \[bu] 2
Colin Nicholson <colin@colinn.com>
.IP \[bu] 2
Klaus Post <klauspost@gmail.com>
.IP \[bu] 2
Sergey Tolmachev <tolsi.ru@gmail.com>
.IP \[bu] 2
Adriano Aurélio Meirelles <adriano@atinge.com>
.IP \[bu] 2
C.
Bess <cbess@users.noreply.github.com>
.IP \[bu] 2
Dmitry Burdeev <dibu28@gmail.com>
.IP \[bu] 2
Joseph Spurrier <github@josephspurrier.com>
.IP \[bu] 2
Björn Harrtell <bjorn@wololo.org>
.IP \[bu] 2
Xavier Lucas <xavier.lucas@corp.ovh.com>
.IP \[bu] 2
Werner Beroux <werner@beroux.com>
.IP \[bu] 2
Brian Stengaard <brian@stengaard.eu>
.IP \[bu] 2
Jakub Gedeon <jgedeon@sofi.com>
.IP \[bu] 2
Jim Tittsler <jwt@onjapan.net>
.IP \[bu] 2
Michal Witkowski <michal@improbable.io>
.IP \[bu] 2
Fabian Ruff <fabian.ruff@sap.com>
.IP \[bu] 2
Leigh Klotz <klotz@quixey.com>
.IP \[bu] 2
Romain Lapray <lapray.romain@gmail.com>
.IP \[bu] 2
Justin R.
Wilson <jrw972@gmail.com>
.SS Contact the rclone project
.PP
The project website is at:
.IP \[bu] 2
https://github.com/ncw/rclone
.PP
There you can file bug reports, ask for help or contribute pull
requests.
.PP
See also
.IP \[bu] 2
Google+ page for general comments
.PP
Or email Nick Craig\-Wood (mailto:nick@craig-wood.com)
.SH AUTHORS
Nick Craig\-Wood.