Version v1.29

Nick Craig-Wood 2016-04-18 18:30:29 +01:00
parent b5cba73cc3
commit 157d7d45f5
7 changed files with 755 additions and 188 deletions

MANUAL.html

@@ -12,7 +12,7 @@
 <div id="header">
 <h1 class="title">rclone(1) User Manual</h1>
 <h2 class="author">Nick Craig-Wood</h2>
-<h3 class="date">Mar 01, 2016</h3>
+<h3 class="date">Apr 18, 2016</h3>
 </div>
 <h1 id="rclone">Rclone</h1>
 <p><a href="http://rclone.org/"><img src="http://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@@ -51,10 +51,10 @@
 <h2 id="install">Install</h2>
 <p>Rclone is a Go program and comes as a single binary file.</p>
 <p><a href="http://rclone.org/downloads/">Download</a> the relevant binary.</p>
-<p>Or alternatively if you have Go installed use</p>
+<p>Or alternatively if you have Go 1.5+ installed use</p>
 <pre><code>go get github.com/ncw/rclone</code></pre>
-<p>and this will build the binary in <code>$GOPATH/bin</code>. If you have built rclone before then you will want to update its dependencies first with this (remove <code>-f</code> if using go &lt; 1.4)</p>
-<pre><code>go get -u -v -f github.com/ncw/rclone/...</code></pre>
+<p>and this will build the binary in <code>$GOPATH/bin</code>. If you have built rclone before then you will want to update its dependencies first with this</p>
+<pre><code>go get -u -v github.com/ncw/rclone/...</code></pre>
 <p>See the <a href="http://rclone.org/docs/">Usage section</a> of the docs for how to use rclone, or run <code>rclone -h</code>.</p>
 <h2 id="linux-binary-downloaded-files-install-example">linux binary downloaded files install example</h2>
 <pre><code>unzip rclone-v1.17-linux-amd64.zip
@@ -148,36 +148,63 @@ rclone --dry-run --min-size 100M delete remote:path</code></pre>
 <p>That reads &quot;delete everything with a minimum size of 100 MB&quot;, hence delete all files bigger than 100MBytes.</p>
 <h3 id="rclone-check-sourcepath-destpath">rclone check source:path dest:path</h3>
 <p>Checks the files in the source and destination match. It compares sizes and MD5SUMs and prints a report of files which don't match. It doesn't alter the source or destination.</p>
+<p><code>--size-only</code> may be used to only compare the sizes, not the MD5SUMs.</p>
 <h3 id="rclone-dedupe-remotepath">rclone dedupe remote:path</h3>
-<p>Interactively find duplicate files and offer to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.</p>
+<p>By default <code>dedupe</code> interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.</p>
+<p>The <code>dedupe</code> command will delete all but one of any identical (same md5sum) files it finds without confirmation. This means that for most duplicated files the <code>dedupe</code> command will not be interactive. You can use <code>--dry-run</code> to see what would happen without doing anything.</p>
+<p>Here is an example run.</p>
+<p>Before - with duplicates</p>
+<pre><code>$ rclone lsl drive:dupes
+6048320 2016-03-05 16:23:16.798000000 one.txt
+6048320 2016-03-05 16:23:11.775000000 one.txt
+564374 2016-03-05 16:23:06.731000000 one.txt
+6048320 2016-03-05 16:18:26.092000000 one.txt
+6048320 2016-03-05 16:22:46.185000000 two.txt
+1744073 2016-03-05 16:22:38.104000000 two.txt
+564374 2016-03-05 16:22:52.118000000 two.txt</code></pre>
+<p>Now the <code>dedupe</code> session</p>
 <pre><code>$ rclone dedupe drive:dupes
-2016/01/31 14:13:11 Google drive root &#39;dupes&#39;: Looking for duplicates
-two.txt: Found 3 duplicates
-1: 564374 bytes, 2016-01-31 14:07:22.159000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
-2: 1744073 bytes, 2016-01-31 14:07:12.490000000, md5sum 851957f7fb6f0bc4ce76be966d336802
-3: 6048320 bytes, 2016-01-31 14:07:02.111000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2016/03/05 16:24:37 Google drive root &#39;dupes&#39;: Looking for duplicates using interactive mode.
+one.txt: Found 4 duplicates - deleting identical copies
+one.txt: Deleting 2/3 identical duplicates (md5sum &quot;1eedaa9fe86fd4b8632e2ac549403b36&quot;)
+one.txt: 2 duplicates remain
+1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+s) Skip and do nothing
+k) Keep just one (choose which in next step)
+r) Rename all to be different (by changing file.jpg to file-1.jpg)
+s/k/r&gt; k
+Enter the number of the file to keep&gt; 1
+one.txt: Deleted 1 extra copies
+two.txt: Found 3 duplicates - deleting identical copies
+two.txt: 3 duplicates remain
+1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
 s) Skip and do nothing
 k) Keep just one (choose which in next step)
 r) Rename all to be different (by changing file.jpg to file-1.jpg)
 s/k/r&gt; r
 two-1.txt: renamed from: two.txt
 two-2.txt: renamed from: two.txt
-two-3.txt: renamed from: two.txt
-one.txt: Found 2 duplicates
-1: 6579 bytes, 2016-01-31 14:05:01.235000000, md5sum 2b76c776249409d925ae7ccd49aea59b
-2: 6579 bytes, 2016-01-31 12:50:30.318000000, md5sum 2b76c776249409d925ae7ccd49aea59b
-s) Skip and do nothing
-k) Keep just one (choose which in next step)
-r) Rename all to be different (by changing file.jpg to file-1.jpg)
-s/k/r&gt; k
-Enter the number of the file to keep&gt; 2
-one.txt: Deleted 1 extra copies</code></pre>
+two-3.txt: renamed from: two.txt</code></pre>
 <p>The result being</p>
 <pre><code>$ rclone lsl drive:dupes
-564374 2016-01-31 14:07:22.159000000 two-1.txt
-1744073 2016-01-31 14:07:12.490000000 two-2.txt
-6048320 2016-01-31 14:07:02.111000000 two-3.txt
-6579 2016-01-31 12:50:30.318000000 one.txt</code></pre>
+6048320 2016-03-05 16:23:16.798000000 one.txt
+564374 2016-03-05 16:22:52.118000000 two-1.txt
+6048320 2016-03-05 16:22:46.185000000 two-2.txt
+1744073 2016-03-05 16:22:38.104000000 two-3.txt</code></pre>
+<p>Dedupe can be run non interactively using the <code>--dedupe-mode</code> flag.</p>
+<ul>
+<li><code>--dedupe-mode interactive</code> - interactive as above.</li>
+<li><code>--dedupe-mode skip</code> - removes identical files then skips anything left.</li>
+<li><code>--dedupe-mode first</code> - removes identical files then keeps the first one.</li>
+<li><code>--dedupe-mode newest</code> - removes identical files then keeps the newest one.</li>
+<li><code>--dedupe-mode oldest</code> - removes identical files then keeps the oldest one.</li>
+<li><code>--dedupe-mode rename</code> - removes identical files then renames the rest to be different.</li>
+</ul>
+<p>For example to rename all the identically named photos in your Google Photos directory, do</p>
+<pre><code>rclone dedupe --dedupe-mode rename &quot;drive:Google Photos&quot;</code></pre>
 <h3 id="rclone-config">rclone config</h3>
 <p>Enter an interactive configuration session.</p>
 <h3 id="rclone-help">rclone help</h3>
@@ -216,11 +243,16 @@ rclone sync /path/to/files remote:current-backup</code></pre>
 <h3 id="contimeouttime">--contimeout=TIME</h3>
 <p>Set the connection timeout. This should be in go time format which looks like <code>5s</code> for 5 seconds, <code>10m</code> for 10 minutes, or <code>3h30m</code>.</p>
 <p>The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is <code>1m</code> by default.</p>
+<h3 id="dedupe-mode-mode">--dedupe-mode MODE</h3>
+<p>Mode to run dedupe command in. One of <code>interactive</code>, <code>skip</code>, <code>first</code>, <code>newest</code>, <code>oldest</code>, <code>rename</code>. The default is <code>interactive</code>. See the dedupe command for more information as to what these options mean.</p>
 <h3 id="n---dry-run">-n, --dry-run</h3>
 <p>Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the <code>sync</code> command which deletes files in the destination.</p>
 <h3 id="ignore-existing">--ignore-existing</h3>
 <p>Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.</p>
 <p>While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.</p>
+<h3 id="i---ignore-times">-I, --ignore-times</h3>
+<p>Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.</p>
+<p>Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using <code>--checksum</code>).</p>
 <h3 id="log-filefile">--log-file=FILE</h3>
 <p>Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the <code>-v</code> flag.</p>
 <h3 id="low-level-retries-number">--low-level-retries NUMBER</h3>
@@ -406,6 +438,7 @@ y/e/d&gt;</code></pre>
 /file.jpg - matches &quot;file.jpg&quot; in the root directory of the remote
 - doesn&#39;t match &quot;afile.jpg&quot;
 - doesn&#39;t match &quot;directory/file.jpg&quot;</code></pre>
+<p><strong>Important</strong> Note that you must use <code>/</code> in patterns and not <code>\</code> even if running on Windows.</p>
 <p>A <code>*</code> matches anything but not a <code>/</code>.</p>
 <pre><code>*.jpg - matches &quot;file.jpg&quot;
 - matches &quot;directory/file.jpg&quot;
@@ -626,7 +659,7 @@ file2.jpg</code></pre>
 <tr class="odd">
 <td align="left">Backblaze B2</td>
 <td align="center">SHA1</td>
-<td align="center">Partial</td>
+<td align="center">Yes</td>
 <td align="center">No</td>
 <td align="center">No</td>
 </tr>
@@ -652,7 +685,6 @@ file2.jpg</code></pre>
 <h3 id="modtime">ModTime</h3>
 <p>The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the <code>--checksum</code> flag.</p>
 <p>All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.</p>
-<p>Backblaze B2 preserves file modification times on files uploaded and downloaded, but doesn't use them to decide which objects to sync.</p>
 <h3 id="case-insensitive">Case Insensitive</h3>
 <p>If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg <code>file.txt</code> and <code>FILE.txt</code>. If a cloud storage system is case insensitive then that isn't possible.</p>
 <p>This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.</p>
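To make the ModTime trade-off above concrete, a minimal sketch of the `--checksum` behaviour it mentions (remote name illustrative):

```
# Sync comparing MD5SUMs rather than modification times and sizes
rclone sync --checksum /home/user/docs remote:docs
```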
@@ -665,7 +697,7 @@ file2.jpg</code></pre>
 <p>Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.</p>
 <h3 id="duplicate-files">Duplicate files</h3>
 <p>If a cloud storage system allows duplicate files then it can have two objects with the same name.</p>
-<p>This confuses rclone greatly when syncing.</p>
+<p>This confuses rclone greatly when syncing - use the <code>rclone dedupe</code> command to rename or remove duplicates.</p>
 <h2 id="google-drive">Google Drive</h2>
 <p>Paths are specified as <code>drive:path</code></p>
 <p>Drive paths may be as deep as required, eg <code>drive:directory/subdirectory</code>.</p>
@@ -750,11 +782,13 @@ y/e/d&gt; y</code></pre>
 <h3 id="specific-options">Specific options</h3>
 <p>Here are the command line options specific to this cloud storage system.</p>
 <h4 id="drive-chunk-sizesize">--drive-chunk-size=SIZE</h4>
-<p>Upload chunk size. Must be a power of 2 &gt;= 256k. Default value is 256kB.</p>
+<p>Upload chunk size. Must be a power of 2 &gt;= 256k. Default value is 8 MB.</p>
+<p>Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.</p>
+<p>Reducing this will reduce memory usage but decrease performance.</p>
 <h4 id="drive-full-list">--drive-full-list</h4>
-<p>Use a full listing for directory list. More data but usually quicker. On by default, disable with <code>--full-drive-list=false</code>.</p>
+<p>No longer does anything - kept for backwards compatibility.</p>
 <h4 id="drive-upload-cutoffsize">--drive-upload-cutoff=SIZE</h4>
-<p>File size cutoff for switching to chunked upload. Default is 256kB.</p>
+<p>File size cutoff for switching to chunked upload. Default is 8 MB.</p>
 <h4 id="drive-use-trash">--drive-use-trash</h4>
 <p>Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.</p>
 <h4 id="drive-auth-owner-only">--drive-auth-owner-only</h4>
@@ -1169,6 +1203,8 @@ y/e/d&gt; y</code></pre>
 <h3 id="modified-time-2">Modified time</h3>
 <p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p>
 <p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
+<h3 id="limitations-1">Limitations</h3>
+<p>The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.</p>
 <h2 id="dropbox">Dropbox</h2>
 <p>Paths are specified as <code>remote:path</code></p>
 <p>Dropbox paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
@@ -1244,9 +1280,10 @@ y/e/d&gt; y</code></pre>
 <p>Here are the command line options specific to this cloud storage system.</p>
 <h4 id="dropbox-chunk-sizesize">--dropbox-chunk-size=SIZE</h4>
 <p>Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.</p>
-<h3 id="limitations-1">Limitations</h3>
+<h3 id="limitations-2">Limitations</h3>
 <p>Note that Dropbox is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
 <p>There are some file names such as <code>thumbs.db</code> which Dropbox can't store. There is a full list of them in the <a href="https://www.dropbox.com/en/help/145">&quot;Ignored Files&quot; section of this document</a>. Rclone will issue an error message <code>File name disallowed - not uploading</code> if it attempts to upload one of those file names, but the sync won't fail.</p>
+<p>If you have more than 10,000 files in a directory then <code>rclone purge dropbox:dir</code> will return the error <code>Failed to purge: There are too many files involved in this operation</code>. As a work-around do an <code>rclone delete dropbox:dir</code> followed by an <code>rclone rmdir dropbox:dir</code>.</p>
 <h2 id="google-cloud-storage">Google Cloud Storage</h2>
 <p>Paths are specified as <code>remote:bucket</code> (or <code>remote:</code> for the <code>lsd</code> command.) You may put subdirectories in too, eg <code>remote:bucket/path/to/dir</code>.</p>
 <p>The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. <code>rclone config</code> walks you through it.</p>
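The Dropbox work-around above, written out as the two commands (using the placeholder `dropbox:dir` from the text):

```
# purge fails on directories with more than 10,000 files;
# delete the contents, then remove the now-empty directory
rclone delete dropbox:dir
rclone rmdir dropbox:dir
```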
@@ -1430,7 +1467,7 @@ y/e/d&gt; y</code></pre>
 <h4 id="acd-templink-thresholdsize">--acd-templink-threshold=SIZE</h4>
 <p>Files this size or more will be downloaded via their <code>tempLink</code>. This is to work around a problem with Amazon Cloud Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.</p>
 <p>To download files above this threshold, rclone requests a <code>tempLink</code> which downloads the file through a temporary URL directly from the underlying S3 storage.</p>
-<h3 id="limitations-2">Limitations</h3>
+<h3 id="limitations-3">Limitations</h3>
 <p>Note that Amazon cloud drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
 <p>Amazon cloud drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see <code>--retries</code> flag) which should hopefully work around this problem.</p>
 <p>Amazon cloud drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.</p>
@@ -1518,7 +1555,7 @@ y/e/d&gt; y</code></pre>
 <p>Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.</p>
 <h4 id="onedrive-upload-cutoffsize">--onedrive-upload-cutoff=SIZE</h4>
 <p>Cutoff for switching to chunked upload - must be &lt;= 100MB. The default is 10MB.</p>
-<h3 id="limitations-3">Limitations</h3>
+<h3 id="limitations-4">Limitations</h3>
 <p>Note that One Drive is case insensitive so you can't have a file called &quot;Hello.doc&quot; and one called &quot;hello.doc&quot;.</p>
 <p>Rclone only supports your default One Drive, and doesn't work with One Drive for business. Both these issues may be fixed at some point depending on user demand!</p>
 <p>There are quite a few characters that can't be in One Drive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a <code>?</code> in it, it will be mapped to <code></code> instead.</p>
@@ -1596,8 +1633,9 @@ y/e/d&gt; y</code></pre>
 <p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p>
 <p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
 <p>Note that Hubic wraps the Swift backend, so most of the properties are the same.</p>
-<h3 id="limitations-4">Limitations</h3>
-<p>Code to refresh the OpenStack token isn't done yet which may cause problems with very long transfers.</p>
+<h3 id="limitations-5">Limitations</h3>
+<p>This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.</p>
+<p>The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.</p>
 <h2 id="backblaze-b2">Backblaze B2</h2>
 <p>B2 is <a href="https://www.backblaze.com/b2/">Backblaze's cloud storage system</a>.</p>
 <p>Paths are specified as <code>remote:bucket</code> (or <code>remote:</code> for the <code>lsd</code> command.) You may put subdirectories in too, eg <code>remote:bucket/path/to/dir</code>.</p>
@@ -1662,15 +1700,17 @@ y/e/d&gt; y</code></pre>
 <pre><code>rclone sync /home/local/directory remote:bucket</code></pre>
 <h3 id="modified-time-5">Modified time</h3>
 <p>The modified time is stored as metadata on the object as <code>X-Bz-Info-src_last_modified_millis</code> as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.</p>
-<p>Modified times are set on upload, read on download and shown in listings. They are not used in syncing as unfortunately B2 doesn't have an API method to set them independently of doing an upload.</p>
+<p>Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.</p>
 <h3 id="sha1-checksums">SHA1 checksums</h3>
 <p>The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. You can use the <code>--checksum</code> flag.</p>
 <h3 id="versions">Versions</h3>
 <p>When rclone uploads a new version of a file it creates a <a href="https://www.backblaze.com/b2/docs/file_versions.html">new version of it</a>. Likewise when you delete a file, the old version will still be available.</p>
 <p>The old versions of files are visible in the B2 web interface, but not via rclone yet.</p>
 <p>Rclone doesn't provide any way of managing old versions (downloading them or deleting them) at the moment. When you <code>purge</code> a bucket, all the old versions will be deleted.</p>
+<h3 id="transfers">Transfers</h3>
+<p>Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about <code>--transfers 32</code> though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of <code>--transfers 4</code> is definitely too low for Backblaze B2 though.</p>
 <h3 id="api">API</h3>
-<p>Here are <a href="https://gist.github.com/ncw/166dabf352b399f1cc1c">some notes I made on the backblaze API</a> while integrating it with rclone which detail the changes I'd like to see. With a couple of small tweaks Backblaze could enable rclone to not make a temporary copy of files when doing cloud to cloud copies and fully support modification times.</p>
+<p>Here are <a href="https://gist.github.com/ncw/166dabf352b399f1cc1c">some notes I made on the backblaze API</a> while integrating it with rclone which detail the changes I'd like to see.</p>
 <h2 id="yandex-disk">Yandex Disk</h2>
 <p><a href="https://disk.yandex.com">Yandex Disk</a> is a cloud storage solution created by <a href="http://yandex.com">Yandex</a>.</p>
 <p>Yandex paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
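Picking up the Backblaze B2 transfers advice in the hunk above, a minimal sketch (the local path and remote name are hypothetical; the value 32 is the one the text recommends):

```
# B2 rewards parallelism; the default of --transfers 4 is too low for it
rclone sync --transfers 32 /home/user/backup remote:bucket
```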
@@ -1774,6 +1814,46 @@ nounc = true</code></pre>
 <p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p>
 <h2 id="changelog">Changelog</h2>
 <ul>
+<li>v1.29 - 2016-04-18
+<ul>
+<li>New Features</li>
+<li>Implement <code>-I, --ignore-times</code> for unconditional upload</li>
+<li>Improve <code>dedupe</code> command
+<ul>
+<li>Now removes identical copies without asking</li>
+<li>Now obeys <code>--dry-run</code></li>
+<li>Implement <code>--dedupe-mode</code> for non interactive running</li>
+<li><code>--dedupe-mode interactive</code> - interactive, the default.</li>
+<li><code>--dedupe-mode skip</code> - removes identical files then skips anything left.</li>
+<li><code>--dedupe-mode first</code> - removes identical files then keeps the first one.</li>
+<li><code>--dedupe-mode newest</code> - removes identical files then keeps the newest one.</li>
+<li><code>--dedupe-mode oldest</code> - removes identical files then keeps the oldest one.</li>
+<li><code>--dedupe-mode rename</code> - removes identical files then renames the rest to be different.</li>
+</ul></li>
+<li>Bug fixes</li>
+<li>Make rclone check obey the <code>--size-only</code> flag.</li>
+<li>Use &quot;application/octet-stream&quot; if discovered mime type is invalid.</li>
+<li>Fix missing &quot;quit&quot; option when there are no remotes.</li>
+<li>Google Drive</li>
+<li>Increase default chunk size to 8 MB - increases upload speed of big files</li>
+<li>Speed up directory listings and make more reliable</li>
+<li>Add missing retries for Move and DirMove - increases reliability</li>
+<li>Preserve mime type on file update</li>
+<li>Backblaze B2</li>
+<li>Enable mod time syncing
+<ul>
+<li>This means that B2 will now check modification times</li>
+<li>It will upload new files to update the modification times</li>
+<li>(there isn't an API to just set the mod time.)</li>
+<li>If you want the old behaviour use <code>--size-only</code>.</li>
+</ul></li>
+<li>Update API to new version</li>
+<li>Fix parsing of mod time when not in metadata</li>
+<li>Swift/Hubic</li>
+<li>Don't return an MD5SUM for static large objects</li>
+<li>S3</li>
+<li>Fix uploading files bigger than 50GB</li>
+</ul></li>
 <li>v1.28 - 2016-03-01
 <ul>
 <li>New Features</li>
@@ -2204,6 +2284,8 @@ ntpclient -s -h pool.ntp.org</code></pre>
 <h3 id="rclone-gives-failed-to-load-config-file-function-not-implemented-error">Rclone gives Failed to load config file: function not implemented error</h3>
 <p>Likely this means that you are running rclone on a Linux version not supported by the go runtime, ie earlier than version 2.6.23.</p>
 <p>See the <a href="https://golang.org/doc/install">system requirements section in the go install docs</a> for full details.</p>
+<h3 id="all-my-uploaded-docxxlsxpptx-files-appear-as-archivezip">All my uploaded docx/xlsx/pptx files appear as archive/zip</h3>
+<p>This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats.</p>
 <h2 id="license">License</h2>
 <p>This is free software under the terms of the MIT license (check the COPYING file included with the source code).</p>
 <pre><code>Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/

MANUAL.md

@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Mar 01, 2016
+% Apr 18, 2016
 Rclone
 ======
@@ -45,15 +45,15 @@ Rclone is a Go program and comes as a single binary file.
 [Download](http://rclone.org/downloads/) the relevant binary.
-Or alternatively if you have Go installed use
+Or alternatively if you have Go 1.5+ installed use
     go get github.com/ncw/rclone
 and this will build the binary in `$GOPATH/bin`. If you have built
 rclone before then you will want to update its dependencies first with
-this (remove `-f` if using go < 1.4)
-    go get -u -v -f github.com/ncw/rclone/...
+this
+    go get -u -v github.com/ncw/rclone/...
 See the [Usage section](http://rclone.org/docs/) of the docs for how to use rclone, or
 run `rclone -h`.
@@ -260,19 +260,55 @@ Checks the files in the source and destination match. It
 compares sizes and MD5SUMs and prints a report of files which
 don't match. It doesn't alter the source or destination.
+`--size-only` may be used to only compare the sizes, not the MD5SUMs.
 ### rclone dedupe remote:path ###
-Interactively find duplicate files and offer to delete all but one or
-rename them to be different. Only useful with Google Drive which can
-have duplicate file names.
+By default `dedupe` interactively finds duplicate files and offers to
+delete all but one or rename them to be different. Only useful with
+Google Drive which can have duplicate file names.
+The `dedupe` command will delete all but one of any identical (same
+md5sum) files it finds without confirmation. This means that for most
+duplicated files the `dedupe` command will not be interactive. You
+can use `--dry-run` to see what would happen without doing anything.
+Here is an example run.
+Before - with duplicates
+```
+$ rclone lsl drive:dupes
+6048320 2016-03-05 16:23:16.798000000 one.txt
+6048320 2016-03-05 16:23:11.775000000 one.txt
+564374 2016-03-05 16:23:06.731000000 one.txt
+6048320 2016-03-05 16:18:26.092000000 one.txt
+6048320 2016-03-05 16:22:46.185000000 two.txt
+1744073 2016-03-05 16:22:38.104000000 two.txt
+564374 2016-03-05 16:22:52.118000000 two.txt
+```
+Now the `dedupe` session
 ```
 $ rclone dedupe drive:dupes
-2016/01/31 14:13:11 Google drive root 'dupes': Looking for duplicates
-two.txt: Found 3 duplicates
-1: 564374 bytes, 2016-01-31 14:07:22.159000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
-2: 1744073 bytes, 2016-01-31 14:07:12.490000000, md5sum 851957f7fb6f0bc4ce76be966d336802
-3: 6048320 bytes, 2016-01-31 14:07:02.111000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
+one.txt: Found 4 duplicates - deleting identical copies
+one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
+one.txt: 2 duplicates remain
+1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+s) Skip and do nothing
+k) Keep just one (choose which in next step)
+r) Rename all to be different (by changing file.jpg to file-1.jpg)
+s/k/r> k
+Enter the number of the file to keep> 1
+one.txt: Deleted 1 extra copies
+two.txt: Found 3 duplicates - deleting identical copies
+two.txt: 3 duplicates remain
+1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
 s) Skip and do nothing
 k) Keep just one (choose which in next step)
 r) Rename all to be different (by changing file.jpg to file-1.jpg)
@@ -280,27 +316,31 @@ s/k/r> r
 two-1.txt: renamed from: two.txt
 two-2.txt: renamed from: two.txt
 two-3.txt: renamed from: two.txt
-one.txt: Found 2 duplicates
-1: 6579 bytes, 2016-01-31 14:05:01.235000000, md5sum 2b76c776249409d925ae7ccd49aea59b
-2: 6579 bytes, 2016-01-31 12:50:30.318000000, md5sum 2b76c776249409d925ae7ccd49aea59b
-s) Skip and do nothing
-k) Keep just one (choose which in next step)
-r) Rename all to be different (by changing file.jpg to file-1.jpg)
-s/k/r> k
-Enter the number of the file to keep> 2
-one.txt: Deleted 1 extra copies
 ```
 The result being
 ```
 $ rclone lsl drive:dupes
-564374 2016-01-31 14:07:22.159000000 two-1.txt
-1744073 2016-01-31 14:07:12.490000000 two-2.txt
-6048320 2016-01-31 14:07:02.111000000 two-3.txt
-6579 2016-01-31 12:50:30.318000000 one.txt
+6048320 2016-03-05 16:23:16.798000000 one.txt
+564374 2016-03-05 16:22:52.118000000 two-1.txt
+6048320 2016-03-05 16:22:46.185000000 two-2.txt
+1744073 2016-03-05 16:22:38.104000000 two-3.txt
 ```
+Dedupe can be run non interactively using the `--dedupe-mode` flag.
+* `--dedupe-mode interactive` - interactive as above.
+* `--dedupe-mode skip` - removes identical files then skips anything left.
+* `--dedupe-mode first` - removes identical files then keeps the first one.
+* `--dedupe-mode newest` - removes identical files then keeps the newest one.
+* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
+* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
+For example to rename all the identically named photos in your Google Photos directory, do
+    rclone dedupe --dedupe-mode rename "drive:Google Photos"
 ### rclone config ###
 Enter an interactive configuration session.
@@ -410,6 +450,10 @@ The connection timeout is the amount of time rclone will wait for a
 connection to go through to a remote object storage system. It is
 `1m` by default.
+### --dedupe-mode MODE ###
+Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean.
 ### -n, --dry-run ###
 Do a trial run with no permanent changes. Use this to see what rclone
@@ -425,6 +469,15 @@ While this isn't a generally recommended option, it can be useful
 in cases where your files change due to encryption. However, it cannot
 correct partial transfers in case a transfer was interrupted.
+### -I, --ignore-times ###
+Using this option will cause rclone to unconditionally upload all
+files regardless of the state of files on the destination.
+Normally rclone would skip any files that have the same
+modification time and are the same size (or have the same checksum if
+using `--checksum`).
 ### --log-file=FILE ###
 Log all of rclone's output to FILE. This is not active by default.
@@ -824,6 +877,9 @@ If it doesn't start with `/` then it is matched starting at the
 - doesn't match "afile.jpg"
 - doesn't match "directory/file.jpg"
+**Important** Note that you must use `/` in patterns and not `\` even
+if running on Windows.
 A `*` matches anything but not a `/`.
 *.jpg - matches "file.jpg"
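To see the new note in action, a hedged sketch using rclone's filter flags (the `--exclude` flag itself isn't shown in this excerpt, but it takes exactly these patterns; the paths are illustrative):

```
# Use / in patterns even on Windows - exclude all jpgs anywhere in the tree
rclone copy --exclude "*.jpg" C:/Users/nick/pictures remote:pictures
```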
@@ -1113,7 +1169,7 @@ Here is an overview of the major features of each cloud storage system.
 | Amazon Cloud Drive | MD5 | No | Yes | No |
 | Microsoft One Drive | SHA1 | Yes | Yes | No |
 | Hubic | MD5 | Yes | No | No |
-| Backblaze B2 | SHA1 | Partial | No | No |
+| Backblaze B2 | SHA1 | Yes | No | No |
 | Yandex Disk | MD5 | Yes | No | No |
 | The local filesystem | All | Yes | Depends | No |
@@ -1137,9 +1193,6 @@ default, though the MD5SUM can be checked with the `--checksum` flag.
 All cloud storage systems support some kind of date on the object and
 these will be set when transferring from the cloud storage system.
-Backblaze B2 preserves file modification times on files uploaded and
-downloaded, but doesn't use them to decide which objects to sync.
 ### Case Insensitive ###
 If a cloud storage system is case sensitive then it is possible to
@@ -1166,7 +1219,8 @@ systems.
 If a cloud storage system allows duplicate files then it can have two
 objects with the same name.
-This confuses rclone greatly when syncing.
+This confuses rclone greatly when syncing - use the `rclone dedupe`
+command to rename or remove duplicates.
 Google Drive
 -----------------------------------------
@@ -1293,16 +1347,20 @@ system.
 #### --drive-chunk-size=SIZE ####
-Upload chunk size. Must be a power of 2 >= 256k. Default value is 256kB.
+Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
+Making this larger will improve performance, but note that each chunk
+is buffered in memory, one per transfer.
+Reducing this will reduce memory usage but decrease performance.
 #### --drive-full-list ####
-Use a full listing for directory list. More data but usually
-quicker. On by default, disable with `--full-drive-list=false`.
+No longer does anything - kept for backwards compatibility.
 #### --drive-upload-cutoff=SIZE ####
-File size cutoff for switching to chunked upload. Default is 256kB.
+File size cutoff for switching to chunked upload. Default is 8 MB.
 #### --drive-use-trash ####
@@ -1764,6 +1822,12 @@ ns.
 This is a defacto standard (used in the official python-swiftclient
 amongst others) for storing the modification time for an object.
+### Limitations ###
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
 Dropbox
 ---------------------------------
@@ -1883,6 +1947,11 @@ of this document](https://www.dropbox.com/en/help/145). Rclone will
 issue an error message `File name disallowed - not uploading` if it
 attempts to upload one of those file names, but the sync won't fail.
+If you have more than 10,000 files in a directory then `rclone purge
+dropbox:dir` will return the error `Failed to purge: There are too
+many files involved in this operation`. As a work-around do an
+`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
 Google Cloud Storage
 -------------------------------------------------
@@ -2424,8 +2493,13 @@ are the same.
 ### Limitations ###
-Code to refresh the OpenStack token isn't done yet which may cause
-problems with very long transfers.
+This uses the normal OpenStack Swift mechanism to refresh the Swift
+API credentials and ignores the expires field returned by the Hubic
+API.
+The Swift API doesn't return a correct MD5SUM for segmented files
+(Dynamic or Static Large Objects) so rclone won't check or use the
+MD5SUM for these.
 Backblaze B2
 ----------------------------------------
@@ -2519,9 +2593,10 @@ The modified time is stored as metadata on the object as
 in the Backblaze standard. Other tools should be able to use this as
 a modified time.
-Modified times are set on upload, read on download and shown in
-listings. They are not used in syncing as unfortunately B2 doesn't
-have an API method to set them independently of doing an upload.
+Modified times are used in syncing and are fully supported except in
+the case of updating a modification time on an existing object. In
+this case the object will be uploaded again as B2 doesn't have an API
+method to set the modification time independent of doing an upload.
 ### SHA1 checksums ###
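Given the new B2 behaviour described in the hunk above, reverting to the old size-based syncing is a one-flag change (a sketch; the path and remote name are illustrative):

```
# Pre-v1.29 behaviour: don't re-upload objects just to update modification times
rclone sync --size-only /home/user/backup remote:bucket
```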
@@ -2542,14 +2617,21 @@ Rclone doesn't provide any way of managing old versions (downloading
 them or deleting them) at the moment. When you `purge` a bucket, all
 the old versions will be deleted.
+### Transfers ###
+Backblaze recommends that you do lots of transfers simultaneously for
+maximum speed. In tests from my SSD equipped laptop the optimum
+setting is about `--transfers 32` though higher numbers may be used
+for a slight speed improvement. The optimum number for you may vary
+depending on your hardware, how big the files are, how much you want
+to load your computer, etc. The default of `--transfers 4` is
+definitely too low for Backblaze B2 though.
 ### API ###
 Here are [some notes I made on the backblaze
 API](https://gist.github.com/ncw/166dabf352b399f1cc1c) while
 integrating it with rclone which detail the changes I'd like to see.
-With a couple of small tweaks Backblaze could enable rclone to not
-make a temporary copy of files when doing cloud to cloud copies and
-fully support modification times.
 Yandex Disk
 ----------------------------------------
@@ -2733,6 +2815,40 @@ file exceeds 258 characters on z, so only use this option if you have to.
 Changelog
 ---------
+* v1.29 - 2016-04-18
+  * New Features
+  * Implement `-I, --ignore-times` for unconditional upload
+  * Improve `dedupe` command
+    * Now removes identical copies without asking
+    * Now obeys `--dry-run`
+    * Implement `--dedupe-mode` for non interactive running
+    * `--dedupe-mode interactive` - interactive, the default.
+    * `--dedupe-mode skip` - removes identical files then skips anything left.
+    * `--dedupe-mode first` - removes identical files then keeps the first one.
+    * `--dedupe-mode newest` - removes identical files then keeps the newest one.
+    * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
+    * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
+  * Bug fixes
+  * Make rclone check obey the `--size-only` flag.
+  * Use "application/octet-stream" if discovered mime type is invalid.
+  * Fix missing "quit" option when there are no remotes.
+  * Google Drive
+  * Increase default chunk size to 8 MB - increases upload speed of big files
+  * Speed up directory listings and make more reliable
+  * Add missing retries for Move and DirMove - increases reliability
+  * Preserve mime type on file update
+  * Backblaze B2
+  * Enable mod time syncing
+    * This means that B2 will now check modification times
+    * It will upload new files to update the modification times
+    * (there isn't an API to just set the mod time.)
+    * If you want the old behaviour use `--size-only`.
+  * Update API to new version
+  * Fix parsing of mod time when not in metadata
+  * Swift/Hubic
+  * Don't return an MD5SUM for static large objects
+  * S3
+  * Fix uploading files bigger than 50GB
 * v1.28 - 2016-03-01
   * New Features
   * Configuration file encryption - thanks Klaus Post
@@ -3178,6 +3294,14 @@ supported by the go runtime, ie earlier than version 2.6.23.
 See the [system requirements section in the go install
 docs](https://golang.org/doc/install) for full details.
+### All my uploaded docx/xlsx/pptx files appear as archive/zip ###
+This is caused by uploading these files from a Windows computer which
+hasn't got the Microsoft Office suite installed. The easiest way to
+fix this is to install the Word viewer and the Microsoft Office
+Compatibility Pack for Word, Excel, and PowerPoint 2007 and later
+versions' file formats.
 License
 -------

MANUAL.txt

@@ -1,6 +1,6 @@
 rclone(1) User Manual
 Nick Craig-Wood
-Mar 01, 2016
+Apr 18, 2016
@@ -48,15 +48,14 @@ Rclone is a Go program and comes as a single binary file.
 Download the relevant binary.
-Or alternatively if you have Go installed use
+Or alternatively if you have Go 1.5+ installed use
 go get github.com/ncw/rclone
 and this will build the binary in $GOPATH/bin. If you have built rclone
 before then you will want to update its dependencies first with this
-(remove -f if using go < 1.4)
-go get -u -v -f github.com/ncw/rclone/...
+go get -u -v github.com/ncw/rclone/...
 See the Usage section of the docs for how to use rclone, or run
 rclone -h.
@@ -262,18 +261,52 @@ Checks the files in the source and destination match. It compares sizes
 and MD5SUMs and prints a report of files which don't match. It doesn't
 alter the source or destination.
+--size-only may be used to only compare the sizes, not the MD5SUMs.
 rclone dedupe remote:path
-Interactively find duplicate files and offer to delete all but one or
-rename them to be different. Only useful with Google Drive which can
-have duplicate file names.
+By default dedupe interactively finds duplicate files and offers to
+delete all but one or rename them to be different. Only useful with
+Google Drive which can have duplicate file names.
+The dedupe command will delete all but one of any identical (same
+md5sum) files it finds without confirmation. This means that for most
+duplicated files the dedupe command will not be interactive. You can use
+--dry-run to see what would happen without doing anything.
+Here is an example run.
+Before - with duplicates
+$ rclone lsl drive:dupes
+6048320 2016-03-05 16:23:16.798000000 one.txt
+6048320 2016-03-05 16:23:11.775000000 one.txt
+564374 2016-03-05 16:23:06.731000000 one.txt
+6048320 2016-03-05 16:18:26.092000000 one.txt
+6048320 2016-03-05 16:22:46.185000000 two.txt
+1744073 2016-03-05 16:22:38.104000000 two.txt
+564374 2016-03-05 16:22:52.118000000 two.txt
+Now the dedupe session
 $ rclone dedupe drive:dupes
-2016/01/31 14:13:11 Google drive root 'dupes': Looking for duplicates
-two.txt: Found 3 duplicates
-1: 564374 bytes, 2016-01-31 14:07:22.159000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
-2: 1744073 bytes, 2016-01-31 14:07:12.490000000, md5sum 851957f7fb6f0bc4ce76be966d336802
-3: 6048320 bytes, 2016-01-31 14:07:02.111000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
+one.txt: Found 4 duplicates - deleting identical copies
+one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
+one.txt: 2 duplicates remain
+1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+s) Skip and do nothing
+k) Keep just one (choose which in next step)
+r) Rename all to be different (by changing file.jpg to file-1.jpg)
+s/k/r> k
+Enter the number of the file to keep> 1
+one.txt: Deleted 1 extra copies
+two.txt: Found 3 duplicates - deleting identical copies
+two.txt: 3 duplicates remain
+1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
+2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
+3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
 s) Skip and do nothing
 k) Keep just one (choose which in next step)
 r) Rename all to be different (by changing file.jpg to file-1.jpg)
@ -281,23 +314,33 @@ have duplicate file names.
two-1.txt: renamed from: two.txt two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt two-3.txt: renamed from: two.txt
one.txt: Found 2 duplicates
1: 6579 bytes, 2016-01-31 14:05:01.235000000, md5sum 2b76c776249409d925ae7ccd49aea59b
2: 6579 bytes, 2016-01-31 12:50:30.318000000, md5sum 2b76c776249409d925ae7ccd49aea59b
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 2
one.txt: Deleted 1 extra copies
The result being The result being
$ rclone lsl drive:dupes $ rclone lsl drive:dupes
564374 2016-01-31 14:07:22.159000000 two-1.txt 6048320 2016-03-05 16:23:16.798000000 one.txt
1744073 2016-01-31 14:07:12.490000000 two-2.txt 564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-01-31 14:07:02.111000000 two-3.txt 6048320 2016-03-05 16:22:46.185000000 two-2.txt
6579 2016-01-31 12:50:30.318000000 one.txt 1744073 2016-03-05 16:22:38.104000000 two-3.txt
Dedupe can be run non-interactively using the --dedupe-mode flag.
- --dedupe-mode interactive - interactive as above.
- --dedupe-mode skip - removes identical files then skips anything
left.
- --dedupe-mode first - removes identical files then keeps the first
one.
- --dedupe-mode newest - removes identical files then keeps the newest
one.
- --dedupe-mode oldest - removes identical files then keeps the oldest
one.
- --dedupe-mode rename - removes identical files then renames the rest
to be different.
For example, to rename all the identically named photos in your Google
Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
rclone config rclone config
@ -407,6 +450,12 @@ The connection timeout is the amount of time rclone will wait for a
connection to go through to a remote object storage system. It is 1m by connection to go through to a remote object storage system. It is 1m by
default. default.
--dedupe-mode MODE
Mode to run dedupe command in. One of interactive, skip, first, newest,
oldest, rename. The default is interactive. See the dedupe command for
more information as to what these options mean.
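For example, a non-interactive run that keeps the newest copy of each
set of duplicates might look like this (the remote name is
illustrative):

rclone dedupe --dedupe-mode newest remote:path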
-n, --dry-run -n, --dry-run
Do a trial run with no permanent changes. Use this to see what rclone Do a trial run with no permanent changes. Use this to see what rclone
@ -422,6 +471,15 @@ While this isn't a generally recommended option, it can be useful in
cases where your files change due to encryption. However, it cannot cases where your files change due to encryption. However, it cannot
correct partial transfers in case a transfer was interrupted. correct partial transfers in case a transfer was interrupted.
-I, --ignore-times
Using this option will cause rclone to unconditionally upload all files
regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification
time and are the same size (or have the same checksum if using
--checksum).
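For example, to force a re-upload of everything (the path and remote
names are illustrative):

rclone sync -I /path/to/local remote:path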
--log-file=FILE --log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This Log all of rclone's output to FILE. This is not active by default. This
@ -814,6 +872,9 @@ will only match a complete path element:
- doesn't match "afile.jpg" - doesn't match "afile.jpg"
- doesn't match "directory/file.jpg" - doesn't match "directory/file.jpg"
IMPORTANT Note that you must use / in patterns and not \ even if running
on Windows.
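For example, on Windows a pattern for a subdirectory should still be
written with forward slashes (an illustrative filter, not from the
original docs):

rclone copy --include "Pictures/**" C:\Users\me remote:backup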
A * matches anything but not a /. A * matches anything but not a /.
*.jpg - matches "file.jpg" *.jpg - matches "file.jpg"
@ -1107,7 +1168,7 @@ Here is an overview of the major features of each cloud storage system.
Amazon Cloud Drive MD5 No Yes No Amazon Cloud Drive MD5 No Yes No
Microsoft One Drive SHA1 Yes Yes No Microsoft One Drive SHA1 Yes Yes No
Hubic MD5 Yes No No Hubic MD5 Yes No No
Backblaze B2 SHA1 Partial No No Backblaze B2 SHA1 Yes No No
Yandex Disk MD5 Yes No No Yandex Disk MD5 Yes No No
The local filesystem All Yes Depends No The local filesystem All Yes Depends No
@ -1131,9 +1192,6 @@ the MD5SUM can be checked with the --checksum flag.
All cloud storage systems support some kind of date on the object and All cloud storage systems support some kind of date on the object and
these will be set when transferring from the cloud storage system. these will be set when transferring from the cloud storage system.
Backblaze B2 preserves file modification times on files uploaded and
downloaded, but doesn't use them to decide which objects to sync.
Case Insensitive Case Insensitive
If a cloud storage system is case sensitive then it is possible to have If a cloud storage system is case sensitive then it is possible to have
@ -1160,7 +1218,8 @@ Duplicate files
If a cloud storage system allows duplicate files then it can have two If a cloud storage system allows duplicate files then it can have two
objects with the same name. objects with the same name.
This confuses rclone greatly when syncing. This confuses rclone greatly when syncing - use the rclone dedupe
command to rename or remove duplicates.
Google Drive Google Drive
@ -1284,16 +1343,20 @@ Here are the command line options specific to this cloud storage system.
--drive-chunk-size=SIZE --drive-chunk-size=SIZE
Upload chunk size. Must be a power of 2 >= 256k. Default value is 256kB. Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
Making this larger will improve performance, but note that each chunk is
buffered in memory, one per transfer.
Reducing this will reduce memory usage but decrease performance.
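For example, to trade more memory for faster uploads of big files (the
size here is illustrative - it must be a power of 2 >= 256k):

rclone copy --drive-chunk-size 64M /path/to/files drive:backup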
--drive-full-list --drive-full-list
Use a full listing for directory list. More data but usually quicker. On No longer does anything - kept for backwards compatibility.
by default, disable with --full-drive-list=false.
--drive-upload-cutoff=SIZE --drive-upload-cutoff=SIZE
File size cutoff for switching to chunked upload. Default is 256kB. File size cutoff for switching to chunked upload. Default is 8 MB.
--drive-use-trash --drive-use-trash
@ -1745,6 +1808,12 @@ X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object. amongst others) for storing the modification time for an object.
Limitations
The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
Dropbox Dropbox
@ -1861,6 +1930,12 @@ document. Rclone will issue an error message
File name disallowed - not uploading if it attempts to upload one of File name disallowed - not uploading if it attempts to upload one of
those file names, but the sync won't fail. those file names, but the sync won't fail.
If you have more than 10,000 files in a directory then
rclone purge dropbox:dir will return the error
Failed to purge: There are too many files involved in this operation. As
a work-around do an rclone delete dropbox:dir followed by an
rclone rmdir dropbox:dir.
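That is, instead of a single purge (the directory name is
illustrative):

rclone delete dropbox:dir
rclone rmdir dropbox:dir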
Google Cloud Storage Google Cloud Storage
@ -2392,8 +2467,12 @@ are the same.
Limitations Limitations
Code to refresh the OpenStack token isn't done yet which may cause This uses the normal OpenStack Swift mechanism to refresh the Swift API
problems with very long transfers. credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
Backblaze B2 Backblaze B2
@ -2485,9 +2564,10 @@ X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in
the Backblaze standard. Other tools should be able to use this as a the Backblaze standard. Other tools should be able to use this as a
modified time. modified time.
Modified times are set on upload, read on download and shown in Modified times are used in syncing and are fully supported except in the
listings. They are not used in syncing as unfortunately B2 doesn't have case of updating a modification time on an existing object. In this case
an API method to set them independently of doing an upload. the object will be uploaded again as B2 doesn't have an API method to
set the modification time independent of doing an upload.
SHA1 checksums SHA1 checksums
@ -2507,13 +2587,20 @@ Rclone doesn't provide any way of managing old versions (downloading
them or deleting them) at the moment. When you purge a bucket, all the them or deleting them) at the moment. When you purge a bucket, all the
old versions will be deleted. old versions will be deleted.
Transfers
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed. In tests from my SSD-equipped laptop the optimum setting
is about --transfers 32, though higher numbers may be used for a slight
speed improvement. The optimum number for you may vary depending on your
hardware, how big the files are, how much you want to load your
computer, etc. The default of --transfers 4 is definitely too low for
Backblaze B2 though.
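For example (the bucket name is illustrative):

rclone sync --transfers 32 /path/to/dir b2:bucket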
API API
Here are some notes I made on the backblaze API while integrating it Here are some notes I made on the backblaze API while integrating it
with rclone which detail the changes I'd like to see. With a couple of with rclone which detail the changes I'd like to see.
small tweaks Backblaze could enable rclone to not make a temporary copy
of files when doing cloud to cloud copies and fully support modification
times.
Yandex Disk Yandex Disk
@ -2692,6 +2779,47 @@ characters on z, so only use this option if you have to.
Changelog Changelog
- v1.29 - 2016-04-18
- New Features
- Implement -I, --ignore-times for unconditional upload
- Improve dedupe command
- Now removes identical copies without asking
- Now obeys --dry-run
- Implement --dedupe-mode for non-interactive running
- --dedupe-mode interactive - interactive, the default.
- --dedupe-mode skip - removes identical files then skips
anything left.
- --dedupe-mode first - removes identical files then keeps the
first one.
- --dedupe-mode newest - removes identical files then keeps
the newest one.
- --dedupe-mode oldest - removes identical files then keeps
the oldest one.
- --dedupe-mode rename - removes identical files then renames
the rest to be different.
- Bug fixes
- Make rclone check obey the --size-only flag.
- Use "application/octet-stream" if discovered mime type is
invalid.
- Fix missing "quit" option when there are no remotes.
- Google Drive
- Increase default chunk size to 8 MB - increases upload speed of
big files
- Speed up directory listings and make more reliable
- Add missing retries for Move and DirMove - increases reliability
- Preserve mime type on file update
- Backblaze B2
- Enable mod time syncing
- This means that B2 will now check modification times
- It will upload new files to update the modification times
- (there isn't an API to just set the mod time.)
- If you want the old behaviour use --size-only.
- Update API to new version
- Fix parsing of mod time when not in metadata
- Swift/Hubic
- Don't return an MD5SUM for static large objects
- S3
- Fix uploading files bigger than 50GB
- v1.28 - 2016-03-01 - v1.28 - 2016-03-01
- New Features - New Features
- Configuration file encryption - thanks Klaus Post - Configuration file encryption - thanks Klaus Post
@ -3163,6 +3291,14 @@ supported by the go runtime, ie earlier than version 2.6.23.
See the system requirements section in the go install docs for full See the system requirements section in the go install docs for full
details. details.
All my uploaded docx/xlsx/pptx files appear as archive/zip
This is caused by uploading these files from a Windows computer which
hasn't got the Microsoft Office suite installed. The easiest way to fix
this is to install the Word viewer and the Microsoft Office
Compatibility Pack for Word, Excel, and PowerPoint 2007 and later
versions' file formats.
License License
View File
@ -1,12 +1,46 @@
--- ---
title: "Documentation" title: "Documentation"
description: "Rclone Changelog" description: "Rclone Changelog"
date: "2016-03-01" date: "2016-04-18"
--- ---
Changelog Changelog
--------- ---------
* v1.29 - 2016-04-18
* New Features
* Implement `-I, --ignore-times` for unconditional upload
* Improve `dedupe` command
* Now removes identical copies without asking
* Now obeys `--dry-run`
* Implement `--dedupe-mode` for non-interactive running
* `--dedupe-mode interactive` - interactive, the default.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* Bug fixes
* Make rclone check obey the `--size-only` flag.
* Use "application/octet-stream" if discovered mime type is invalid.
* Fix missing "quit" option when there are no remotes.
* Google Drive
* Increase default chunk size to 8 MB - increases upload speed of big files
* Speed up directory listings and make more reliable
* Add missing retries for Move and DirMove - increases reliability
* Preserve mime type on file update
* Backblaze B2
* Enable mod time syncing
* This means that B2 will now check modification times
* It will upload new files to update the modification times
* (there isn't an API to just set the mod time.)
* If you want the old behaviour use `--size-only`.
* Update API to new version
* Fix parsing of mod time when not in metadata
* Swift/Hubic
* Don't return an MD5SUM for static large objects
* S3
* Fix uploading files bigger than 50GB
* v1.28 - 2016-03-01 * v1.28 - 2016-03-01
* New Features * New Features
* Configuration file encryption - thanks Klaus Post * Configuration file encryption - thanks Klaus Post
View File
@ -2,40 +2,40 @@
title: "Rclone downloads" title: "Rclone downloads"
description: "Download rclone binaries for your OS." description: "Download rclone binaries for your OS."
type: page type: page
date: "2016-03-01" date: "2016-04-18"
--- ---
Rclone Download v1.28 Rclone Download v1.29
===================== =====================
* Windows * Windows
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-windows-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-windows-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-windows-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-windows-amd64.zip)
* OSX * OSX
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-osx-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-osx-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-osx-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-osx-amd64.zip)
* Linux * Linux
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-linux-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-linux-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-linux-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.28-linux-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-arm.zip)
* FreeBSD * FreeBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-freebsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-freebsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.28-freebsd-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-arm.zip)
* NetBSD * NetBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-netbsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-netbsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.28-netbsd-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-arm.zip)
* OpenBSD * OpenBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-openbsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-openbsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-amd64.zip)
* Plan 9 * Plan 9
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.28-plan9-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-plan9-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-amd64.zip)
* Solaris * Solaris
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.28-solaris-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-solaris-amd64.zip)
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.28). You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.29).
Downloads for scripting Downloads for scripting
======================= =======================
View File
@ -1,4 +1,4 @@
package fs package fs
// Version of rclone // Version of rclone
const Version = "v1.28" const Version = "v1.29"
rclone.1
View File
@ -1,5 +1,5 @@
.\"t .\"t
.TH "rclone" "1" "Mar 01, 2016" "User Manual" "" .TH "rclone" "1" "Apr 18, 2016" "User Manual" ""
.SH Rclone .SH Rclone
.PP .PP
[IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/) [IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/)
@ -63,7 +63,7 @@ Rclone is a Go program and comes as a single binary file.
.PP .PP
Download (http://rclone.org/downloads/) the relevant binary. Download (http://rclone.org/downloads/) the relevant binary.
.PP .PP
Or alternatively if you have Go installed use Or alternatively if you have Go 1.5+ installed use
.IP .IP
.nf .nf
\f[C] \f[C]
@ -73,11 +73,11 @@ go\ get\ github.com/ncw/rclone
.PP .PP
and this will build the binary in \f[C]$GOPATH/bin\f[]. and this will build the binary in \f[C]$GOPATH/bin\f[].
If you have built rclone before then you will want to update its If you have built rclone before then you will want to update its
dependencies first with this (remove \f[C]\-f\f[] if using go < 1.4) dependencies first with this
.IP .IP
.nf .nf
\f[C] \f[C]
go\ get\ \-u\ \-v\ \-f\ github.com/ncw/rclone/... go\ get\ \-u\ \-v\ github.com/ncw/rclone/...
\f[] \f[]
.fi .fi
.PP .PP
@ -320,20 +320,61 @@ Checks the files in the source and destination match.
It compares sizes and MD5SUMs and prints a report of files which It compares sizes and MD5SUMs and prints a report of files which
don\[aq]t match. don\[aq]t match.
It doesn\[aq]t alter the source or destination. It doesn\[aq]t alter the source or destination.
.PP
\f[C]\-\-size\-only\f[] may be used to only compare the sizes, not the
MD5SUMs.
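.PP
For example, to run a quick size\-only comparison (the remote and path
names are illustrative):
.IP
.nf
\f[C]
rclone\ check\ \-\-size\-only\ source:path\ dest:path
\f[]
.fi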
.SS rclone dedupe remote:path .SS rclone dedupe remote:path
.PP .PP
Interactively find duplicate files and offer to delete all but one or By default \f[C]dedupe\f[] interactively finds duplicate files and offers
rename them to be different. to delete all but one or rename them to be different.
Only useful with Google Drive which can have duplicate file names. Only useful with Google Drive which can have duplicate file names.
.PP
The \f[C]dedupe\f[] command will delete all but one of any identical
(same md5sum) files it finds without confirmation.
This means that for most duplicated files the \f[C]dedupe\f[] command
will not be interactive.
You can use \f[C]\-\-dry\-run\f[] to see what would happen without doing
anything.
.PP
Here is an example run.
.PP
Before \- with duplicates
.IP
.nf
\f[C]
$\ rclone\ lsl\ drive:dupes
\ \ 6048320\ 2016\-03\-05\ 16:23:16.798000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:23:11.775000000\ one.txt
\ \ \ 564374\ 2016\-03\-05\ 16:23:06.731000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:18:26.092000000\ one.txt
\ \ 6048320\ 2016\-03\-05\ 16:22:46.185000000\ two.txt
\ \ 1744073\ 2016\-03\-05\ 16:22:38.104000000\ two.txt
\ \ \ 564374\ 2016\-03\-05\ 16:22:52.118000000\ two.txt
\f[]
.fi
.PP
Now the \f[C]dedupe\f[] session
.IP .IP
.nf .nf
\f[C] \f[C]
$\ rclone\ dedupe\ drive:dupes $\ rclone\ dedupe\ drive:dupes
2016/01/31\ 14:13:11\ Google\ drive\ root\ \[aq]dupes\[aq]:\ Looking\ for\ duplicates 2016/03/05\ 16:24:37\ Google\ drive\ root\ \[aq]dupes\[aq]:\ Looking\ for\ duplicates\ using\ interactive\ mode.
two.txt:\ Found\ 3\ duplicates one.txt:\ Found\ 4\ duplicates\ \-\ deleting\ identical\ copies
\ \ 1:\ \ \ \ \ \ \ 564374\ bytes,\ 2016\-01\-31\ 14:07:22.159000000,\ md5sum\ 7594e7dc9fc28f727c42ee3e0749de81 one.txt:\ Deleting\ 2/3\ identical\ duplicates\ (md5sum\ "1eedaa9fe86fd4b8632e2ac549403b36")
\ \ 2:\ \ \ \ \ \ 1744073\ bytes,\ 2016\-01\-31\ 14:07:12.490000000,\ md5sum\ 851957f7fb6f0bc4ce76be966d336802 one.txt:\ 2\ duplicates\ remain
\ \ 3:\ \ \ \ \ \ 6048320\ bytes,\ 2016\-01\-31\ 14:07:02.111000000,\ md5sum\ 1eedaa9fe86fd4b8632e2ac549403b36 \ \ 1:\ \ \ \ \ \ 6048320\ bytes,\ 2016\-03\-05\ 16:23:16.798000000,\ md5sum\ 1eedaa9fe86fd4b8632e2ac549403b36
\ \ 2:\ \ \ \ \ \ \ 564374\ bytes,\ 2016\-03\-05\ 16:23:06.731000000,\ md5sum\ 7594e7dc9fc28f727c42ee3e0749de81
s)\ Skip\ and\ do\ nothing
k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step)
r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg)
s/k/r>\ k
Enter\ the\ number\ of\ the\ file\ to\ keep>\ 1
one.txt:\ Deleted\ 1\ extra\ copies
two.txt:\ Found\ 3\ duplicates\ \-\ deleting\ identical\ copies
two.txt:\ 3\ duplicates\ remain
\ \ 1:\ \ \ \ \ \ \ 564374\ bytes,\ 2016\-03\-05\ 16:22:52.118000000,\ md5sum\ 7594e7dc9fc28f727c42ee3e0749de81
\ \ 2:\ \ \ \ \ \ 6048320\ bytes,\ 2016\-03\-05\ 16:22:46.185000000,\ md5sum\ 1eedaa9fe86fd4b8632e2ac549403b36
\ \ 3:\ \ \ \ \ \ 1744073\ bytes,\ 2016\-03\-05\ 16:22:38.104000000,\ md5sum\ 851957f7fb6f0bc4ce76be966d336802
s)\ Skip\ and\ do\ nothing s)\ Skip\ and\ do\ nothing
k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step) k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step)
r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg) r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg)
@ -341,15 +382,6 @@ s/k/r>\ r
two\-1.txt:\ renamed\ from:\ two.txt two\-1.txt:\ renamed\ from:\ two.txt
two\-2.txt:\ renamed\ from:\ two.txt two\-2.txt:\ renamed\ from:\ two.txt
two\-3.txt:\ renamed\ from:\ two.txt two\-3.txt:\ renamed\ from:\ two.txt
one.txt:\ Found\ 2\ duplicates
\ \ 1:\ \ \ \ \ \ \ \ \ 6579\ bytes,\ 2016\-01\-31\ 14:05:01.235000000,\ md5sum\ 2b76c776249409d925ae7ccd49aea59b
\ \ 2:\ \ \ \ \ \ \ \ \ 6579\ bytes,\ 2016\-01\-31\ 12:50:30.318000000,\ md5sum\ 2b76c776249409d925ae7ccd49aea59b
s)\ Skip\ and\ do\ nothing
k)\ Keep\ just\ one\ (choose\ which\ in\ next\ step)
r)\ Rename\ all\ to\ be\ different\ (by\ changing\ file.jpg\ to\ file\-1.jpg)
s/k/r>\ k
Enter\ the\ number\ of\ the\ file\ to\ keep>\ 2
one.txt:\ Deleted\ 1\ extra\ copies
\f[] \f[]
.fi .fi
.PP .PP
@ -358,10 +390,39 @@ The result being
.nf .nf
\f[C] \f[C]
$\ rclone\ lsl\ drive:dupes $\ rclone\ lsl\ drive:dupes
\ \ \ 564374\ 2016\-01\-31\ 14:07:22.159000000\ two\-1.txt \ \ 6048320\ 2016\-03\-05\ 16:23:16.798000000\ one.txt
\ \ 1744073\ 2016\-01\-31\ 14:07:12.490000000\ two\-2.txt \ \ \ 564374\ 2016\-03\-05\ 16:22:52.118000000\ two\-1.txt
\ \ 6048320\ 2016\-01\-31\ 14:07:02.111000000\ two\-3.txt \ \ 6048320\ 2016\-03\-05\ 16:22:46.185000000\ two\-2.txt
\ \ \ \ \ 6579\ 2016\-01\-31\ 12:50:30.318000000\ one.txt \ \ 1744073\ 2016\-03\-05\ 16:22:38.104000000\ two\-3.txt
\f[]
.fi
.PP
Dedupe can be run non\-interactively using the \f[C]\-\-dedupe\-mode\f[]
flag.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ interactive\f[] \- interactive as above.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ skip\f[] \- removes identical files then skips
anything left.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ first\f[] \- removes identical files then keeps
the first one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ newest\f[] \- removes identical files then keeps
the newest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ oldest\f[] \- removes identical files then keeps
the oldest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ rename\f[] \- removes identical files then
renames the rest to be different.
.PP
For example, to rename all the identically named photos in your Google
Photos directory, do
.IP
.nf
\f[C]
rclone\ dedupe\ \-\-dedupe\-mode\ rename\ "drive:Google\ Photos"
\f[] \f[]
.fi .fi
.SS rclone config .SS rclone config
@ -478,6 +539,14 @@ seconds, \f[C]10m\f[] for 10 minutes, or \f[C]3h30m\f[].
The connection timeout is the amount of time rclone will wait for a The connection timeout is the amount of time rclone will wait for a
connection to go through to a remote object storage system. connection to go through to a remote object storage system.
It is \f[C]1m\f[] by default. It is \f[C]1m\f[] by default.
.SS \-\-dedupe\-mode MODE
.PP
Mode to run dedupe command in.
One of \f[C]interactive\f[], \f[C]skip\f[], \f[C]first\f[],
\f[C]newest\f[], \f[C]oldest\f[], \f[C]rename\f[].
The default is \f[C]interactive\f[].
See the dedupe command for more information as to what these options
mean.
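.PP
For example, a non\-interactive run that keeps the newest copy of each
set of duplicates might look like this (the remote name is
illustrative):
.IP
.nf
\f[C]
rclone\ dedupe\ \-\-dedupe\-mode\ newest\ remote:path
\f[]
.fi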
.SS \-n, \-\-dry\-run .SS \-n, \-\-dry\-run
.PP .PP
Do a trial run with no permanent changes. Do a trial run with no permanent changes.
@ -493,6 +562,14 @@ While this isn\[aq]t a generally recommended option, it can be useful in
cases where your files change due to encryption. cases where your files change due to encryption.
However, it cannot correct partial transfers in case a transfer was However, it cannot correct partial transfers in case a transfer was
interrupted. interrupted.
.SS \-I, \-\-ignore\-times
.PP
Using this option will cause rclone to unconditionally upload all files
regardless of the state of files on the destination.
.PP
Normally rclone would skip any files that have the same modification
time and are the same size (or have the same checksum if using
\f[C]\-\-checksum\f[]).
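.PP
For example, to force a re\-upload of everything (the path and remote
names are illustrative):
.IP
.nf
\f[C]
rclone\ sync\ \-I\ /path/to/local\ remote:path
\f[]
.fi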
.SS \-\-log\-file=FILE .SS \-\-log\-file=FILE
.PP .PP
Log all of rclone\[aq]s output to FILE. Log all of rclone\[aq]s output to FILE.
@ -919,6 +996,9 @@ file.jpg\ \ \-\ matches\ "file.jpg"
\f[] \f[]
.fi .fi
.PP .PP
\f[B]Important\f[] Note that you must use \f[C]/\f[] in patterns and not
\f[C]\\\f[] even if running on Windows.
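.PP
For example, on Windows a pattern for a subdirectory should still be
written with forward slashes (an illustrative filter, not from the
original docs):
.IP
.nf
\f[C]
rclone\ copy\ \-\-include\ "Pictures/**"\ C:\\Users\\me\ remote:backup
\f[]
.fi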
.PP
A \f[C]*\f[] matches anything but not a \f[C]/\f[]. A \f[C]*\f[] matches anything but not a \f[C]/\f[].
.IP .IP
.nf .nf
@ -1376,7 +1456,7 @@ Backblaze B2
T}@T{ T}@T{
SHA1 SHA1
T}@T{ T}@T{
Partial Yes
T}@T{ T}@T{
No No
T}@T{ T}@T{
@ -1427,9 +1507,6 @@ can be checked with the \f[C]\-\-checksum\f[] flag.
.PP .PP
All cloud storage systems support some kind of date on the object and All cloud storage systems support some kind of date on the object and
these will be set when transferring from the cloud storage system. these will be set when transferring from the cloud storage system.
.PP
Backblaze B2 preserves file modification times on files uploaded and
downloaded, but doesn\[aq]t use them to decide which objects to sync.
.SS Case Insensitive .SS Case Insensitive
.PP .PP
If a cloud storage system is case sensitive then it is possible to have If a cloud storage system is case sensitive then it is possible to have
@ -1461,7 +1538,8 @@ systems.
If a cloud storage system allows duplicate files then it can have two If a cloud storage system allows duplicate files then it can have two
objects with the same name. objects with the same name.
.PP .PP
This confuses rclone greatly when syncing. This confuses rclone greatly when syncing \- use the
\f[C]rclone\ dedupe\f[] command to rename or remove duplicates.
.SS Google Drive .SS Google Drive
.PP .PP
Paths are specified as \f[C]drive:path\f[] Paths are specified as \f[C]drive:path\f[]
@ -1603,16 +1681,19 @@ Here are the command line options specific to this cloud storage system.
.PP .PP
Upload chunk size. Upload chunk size.
Must be a power of 2 >= 256k. Must be a power of 2 >= 256k.
Default value is 256kB. Default value is 8 MB.
.PP
Making this larger will improve performance, but note that each chunk is
buffered in memory, one per transfer.
.PP
Reducing this will reduce memory usage but decrease performance.
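.PP
For example, to trade more memory for faster uploads of big files (the
size is illustrative \- it must be a power of 2 >= 256k):
.IP
.nf
\f[C]
rclone\ copy\ \-\-drive\-chunk\-size\ 64M\ /path/to/files\ drive:backup
\f[]
.fi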
.SS \-\-drive\-full\-list .SS \-\-drive\-full\-list
.PP .PP
Use a full listing for directory list. No longer does anything \- kept for backwards compatibility.
More data but usually quicker.
On by default, disable with \f[C]\-\-full\-drive\-list=false\f[].
.SS \-\-drive\-upload\-cutoff=SIZE .SS \-\-drive\-upload\-cutoff=SIZE
.PP .PP
File size cutoff for switching to chunked upload. File size cutoff for switching to chunked upload.
Default is 256kB. Default is 8 MB.
.SS \-\-drive\-use\-trash .SS \-\-drive\-use\-trash
.PP .PP
Send files to the trash instead of deleting permanently. Send files to the trash instead of deleting permanently.
@ -2249,6 +2330,11 @@ accurate to 1 ns.
.PP .PP
This is a de facto standard (used in the official python\-swiftclient This is a de facto standard (used in the official python\-swiftclient
amongst others) for storing the modification time for an object. amongst others) for storing the modification time for an object.
.SS Limitations
.PP
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these.
.SS Dropbox .SS Dropbox
.PP .PP
Paths are specified as \f[C]remote:path\f[] Paths are specified as \f[C]remote:path\f[]
@ -2386,6 +2472,12 @@ document (https://www.dropbox.com/en/help/145).
Rclone will issue an error message Rclone will issue an error message
\f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to \f[C]File\ name\ disallowed\ \-\ not\ uploading\f[] if it attempts to
upload one of those file names, but the sync won\[aq]t fail. upload one of those file names, but the sync won\[aq]t fail.
.PP
If you have more than 10,000 files in a directory then
\f[C]rclone\ purge\ dropbox:dir\f[] will return the error
\f[C]Failed\ to\ purge:\ There\ are\ too\ many\ files\ involved\ in\ this\ operation\f[].
As a work\-around do an \f[C]rclone\ delete\ dropbox:dir\f[] followed by
an \f[C]rclone\ rmdir\ dropbox:dir\f[].
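.PP
That is, instead of a single purge (the directory name is illustrative):
.IP
.nf
\f[C]
rclone\ delete\ dropbox:dir
rclone\ rmdir\ dropbox:dir
\f[]
.fi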
.SS Google Cloud Storage .SS Google Cloud Storage
.PP .PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
@ -3001,8 +3093,12 @@ Note that Hubic wraps the Swift backend, so most of the properties of
are the same. are the same.
.SS Limitations .SS Limitations
.PP .PP
Code to refresh the OpenStack token isn\[aq]t done yet which may cause This uses the normal OpenStack Swift mechanism to refresh the Swift API
problems with very long transfers. credentials and ignores the expires field returned by the Hubic API.
.PP
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these.
.SS Backblaze B2 .SS Backblaze B2
.PP .PP
B2 is Backblaze\[aq]s cloud storage B2 is Backblaze\[aq]s cloud storage
@ -3118,10 +3214,10 @@ The modified time is stored as metadata on the object as
1970\-01\-01 in the Backblaze standard. 1970\-01\-01 in the Backblaze standard.
Other tools should be able to use this as a modified time. Other tools should be able to use this as a modified time.
.PP .PP
Modified times are set on upload, read on download and shown in Modified times are used in syncing and are fully supported except in the
listings. case of updating a modification time on an existing object.
They are not used in syncing as unfortunately B2 doesn\[aq]t have an API In this case the object will be uploaded again as B2 doesn\[aq]t have an
method to set them independently of doing an upload. API method to set the modification time independent of doing an upload.
.SS SHA1 checksums .SS SHA1 checksums
.PP .PP
The SHA1 checksums of the files are checked on upload and download and The SHA1 checksums of the files are checked on upload and download and
@ -3140,14 +3236,22 @@ via rclone yet.
Rclone doesn\[aq]t provide any way of managing old versions (downloading Rclone doesn\[aq]t provide any way of managing old versions (downloading
them or deleting them) at the moment. them or deleting them) at the moment.
When you \f[C]purge\f[] a bucket, all the old versions will be deleted. When you \f[C]purge\f[] a bucket, all the old versions will be deleted.
.SS Transfers
.PP
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed.
In tests from my SSD\-equipped laptop the optimum setting is about
\f[C]\-\-transfers\ 32\f[], though higher numbers may be used for a
slight speed improvement.
The optimum number for you may vary depending on your hardware, how big
the files are, how much you want to load your computer, etc.
The default of \f[C]\-\-transfers\ 4\f[] is definitely too low for
Backblaze B2 though.
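.PP
For example (the bucket name is illustrative):
.IP
.nf
\f[C]
rclone\ sync\ \-\-transfers\ 32\ /path/to/dir\ b2:bucket
\f[]
.fi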
.SS API .SS API
.PP .PP
Here are some notes I made on the backblaze Here are some notes I made on the backblaze
API (https://gist.github.com/ncw/166dabf352b399f1cc1c) while integrating API (https://gist.github.com/ncw/166dabf352b399f1cc1c) while integrating
it with rclone which detail the changes I\[aq]d like to see. it with rclone which detail the changes I\[aq]d like to see.
With a couple of small tweaks Backblaze could enable rclone to not make
a temporary copy of files when doing cloud to cloud copies and fully
support modification times.
.SS Yandex Disk .SS Yandex Disk
.PP .PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution Yandex Disk (https://disk.yandex.com) is a cloud storage solution
@ -3365,6 +3469,86 @@ Of course this will cause problems if the absolute path length of a file
exceeds 258 characters on z, so only use this option if you have to. exceeds 258 characters on z, so only use this option if you have to.
.SS Changelog .SS Changelog
.IP \[bu] 2 .IP \[bu] 2
v1.29 \- 2016\-04\-18
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Implement \f[C]\-I,\ \-\-ignore\-times\f[] for unconditional upload
.IP \[bu] 2
Improve \f[C]dedupe\f[] command
.RS 2
.IP \[bu] 2
Now removes identical copies without asking
.IP \[bu] 2
Now obeys \f[C]\-\-dry\-run\f[]
.IP \[bu] 2
Implement \f[C]\-\-dedupe\-mode\f[] for non\-interactive running
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ interactive\f[] \- interactive, the default.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ skip\f[] \- removes identical files then skips
anything left.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ first\f[] \- removes identical files then keeps
the first one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ newest\f[] \- removes identical files then keeps
the newest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ oldest\f[] \- removes identical files then keeps
the oldest one.
.IP \[bu] 2
\f[C]\-\-dedupe\-mode\ rename\f[] \- removes identical files then
renames the rest to be different.
.RE
.IP \[bu] 2
Bug fixes
.IP \[bu] 2
Make rclone check obey the \f[C]\-\-size\-only\f[] flag.
.IP \[bu] 2
Use "application/octet\-stream" if discovered mime type is invalid.
.IP \[bu] 2
Fix missing "quit" option when there are no remotes.
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Increase default chunk size to 8 MB \- increases upload speed of big
files
.IP \[bu] 2
Speed up directory listings and make more reliable
.IP \[bu] 2
Add missing retries for Move and DirMove \- increases reliability
.IP \[bu] 2
Preserve mime type on file update
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Enable mod time syncing
.RS 2
.IP \[bu] 2
This means that B2 will now check modification times
.IP \[bu] 2
It will upload new files to update the modification times
.IP \[bu] 2
(there isn\[aq]t an API to just set the mod time.)
.IP \[bu] 2
If you want the old behaviour use \f[C]\-\-size\-only\f[].
.RE
.IP \[bu] 2
Update API to new version
.IP \[bu] 2
Fix parsing of mod time when not in metadata
.IP \[bu] 2
Swift/Hubic
.IP \[bu] 2
Don\[aq]t return an MD5SUM for static large objects
.IP \[bu] 2
S3
.IP \[bu] 2
Fix uploading files bigger than 50GB
.RE
.IP \[bu] 2
v1.28 \- 2016\-03\-01 v1.28 \- 2016\-03\-01
.RS 2 .RS 2
.IP \[bu] 2 .IP \[bu] 2
@ -4230,6 +4414,13 @@ supported by the go runtime, ie earlier than version 2.6.23.
.PP .PP
See the system requirements section in the go install See the system requirements section in the go install
docs (https://golang.org/doc/install) for full details. docs (https://golang.org/doc/install) for full details.
.SS All my uploaded docx/xlsx/pptx files appear as archive/zip
.PP
This is caused by uploading these files from a Windows computer which
hasn\[aq]t got the Microsoft Office suite installed.
The easiest way to fix this is to install the Word viewer and the
Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007
and later versions\[aq] file formats.
.SS License .SS License
.PP .PP
This is free software under the terms of the MIT license (check the This is free software under the terms of the MIT license (check the