---
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2016-07-11"
---

<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
client_id>
Amazon Application Client Secret - leave blank normally.
client_secret>
Remote config
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
code. This is on `http://127.0.0.1:53682/` and it may require
you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Amazon Drive

    rclone lsd remote:

List all the files in your Amazon Drive

    rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag.
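
For example, to sync a local directory using checksums rather than
modification times (reusing the example paths from above):

    rclone sync --checksum /home/source remote:backup

This makes rclone compare MD5SUMs (and sizes) instead of modification
times when deciding what to transfer.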

### Deleting files ###

Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website.

### Using with non `.com` Amazon accounts ###

Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.

### Specific options ###

Here are the command line options specific to this cloud storage
system.

#### --acd-templink-threshold=SIZE ####

Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
underlying S3 storage.
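
If you did need to change it, the flag takes a size like the other
rclone size options; the 5G value and local path below are purely
illustrative:

    rclone copy --acd-templink-threshold 5G remote:backup /home/restore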

#### --acd-upload-wait-per-gb=TIME ####

Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.

The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.

You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.
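
For example, to disable the wait entirely (using the example paths
from above):

    rclone copy --acd-upload-wait-per-gb 0 /home/source remote:backup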

These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.

Upload with the `-v` flag to see more info about what rclone is doing
in this situation.
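
For example, to watch this behaviour during a big upload (again with
the example paths from above):

    rclone copy -v /home/source remote:backup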

### Limitations ###

Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work
around this problem.
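
If a sync still fails because of rate limiting after the automatic
retries, you can raise the retry count; the value of 5 below is just
an example:

    rclone sync --retries 5 /home/source remote:backup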

Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.

At the time of writing (Jan 2016) this limit is in the area of 50GB
per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation as it would any
other failure. To avoid this problem, use the `--max-size 50000M`
option to limit the maximum size of uploaded files. Note that
`--max-size` does not split files into segments, it only ignores files
over this size.
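
For example, to sync while skipping anything likely to hit this limit
(again with the example paths):

    rclone sync --max-size 50000M /home/source remote:backup

Files over the limit are simply excluded from the sync; they are not
uploaded in pieces.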