diff --git a/docs/content/s3.md b/docs/content/s3.md
index c9c484c12..a3961c0a2 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -27,41 +27,55 @@ Choose a number from below
  1) swift
  2) s3
  3) local
- 4) drive
+ 4) google cloud storage
+ 5) dropbox
+ 6) drive
 type> 2
 AWS Access Key ID.
 access_key_id> accesskey
 AWS Secret Access Key (password).
 secret_access_key> secretaccesskey
-Endpoint for S3 API.
+Region to connect to.
 Choose a number from below, or type in your own value
  * The default endpoint - a good choice if you are unsure.
  * US Region, Northern Virginia or Pacific Northwest.
  * Leave location constraint empty.
- 1) https://s3.amazonaws.com/
- * US Region, Northern Virginia only.
- * Leave location constraint empty.
- 2) https://s3-external-1.amazonaws.com
+ 1) us-east-1
+ * US West (Oregon) Region
+ * Needs location constraint us-west-2.
+ 2) us-west-2
 [snip]
  * South America (Sao Paulo) Region
  * Needs location constraint sa-east-1.
- 9) https://s3-sa-east-1.amazonaws.com
-endpoint> 1
-Location constraint - must be set to match the Endpoint.
+ 9) sa-east-1
+ * If using an S3 clone that only understands v2 signatures - eg Ceph - set this and make sure you set the endpoint.
+10) other-v2-signature
+ * If using an S3 clone that understands v4 signatures set this and make sure you set the endpoint.
+11) other-v4-signature
+region> 1
+Endpoint for S3 API.
+Leave blank if using AWS to use the default endpoint for the region.
+Specify if using an S3 clone such as Ceph.
+endpoint> 
+Location constraint - must be set to match the Region. Used when creating buckets only.
 Choose a number from below, or type in your own value
  * Empty for US Region, Northern Virginia or Pacific Northwest.
  1)
  * US West (Oregon) Region.
  2) us-west-2
+ * US West (Northern California) Region.
+ 3) us-west-1
+ * EU (Ireland) Region.
+ 4) eu-west-1
 [snip]
- * South America (Sao Paulo) Region.
- 9) sa-east-1
 location_constraint> 1
+Remote config
 --------------------
 [remote]
 access_key_id = accesskey
 secret_access_key = secretaccesskey
-endpoint = https://s3.amazonaws.com/
+region = us-east-1
+endpoint = 
 location_constraint = 
 --------------------
 y) Yes this is OK
@@ -104,3 +118,48 @@ files in the bucket.
 
 The modified time is stored as metadata on the object as
 `X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.
+
+### Multipart uploads ###
+
+rclone supports multipart uploads with S3 which means that it can
+upload files bigger than 5GB. Note that files uploaded with multipart
+upload don't have an MD5SUM.
+
+### Ceph ###
+
+Ceph is an object storage system which presents an Amazon S3 interface.
+
+To use rclone with ceph, you need to set the following parameters in
+the config.
+
+```
+access_key_id = Whatever
+secret_access_key = Whatever
+endpoint = https://ceph.endpoint.goes.here/
+region = other-v2-signature
+```
+
+Note also that Ceph sometimes puts `/` in the passwords it gives
+users. If you read the secret access key using the command line tools
+you will get a JSON blob with the `/` escaped as `\/`. Make sure you
+only write `/` in the secret access key.
+
+Eg the dump from Ceph looks something like this (irrelevant keys
+removed).
+
+```
+{
+    "user_id": "xxx",
+    "display_name": "xxxx",
+    "keys": [
+        {
+            "user": "xxx",
+            "access_key": "xxxxxx",
+            "secret_key": "xxxxxx\/xxxx"
+        }
+    ],
+}
+```
+
+Because this is a json dump, it is encoding the `/` as `\/`, so if you
+use the secret key as `xxxxxx/xxxx` it will work fine.
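
The config walkthrough above ends with the saved `[remote]` section. As a
quick sanity check of a freshly configured S3 (or Ceph) remote, the standard
rclone commands used elsewhere in this document can be run against it. This
is only a sketch: `remote` and `bucket` are the placeholder names from the
example config above, so substitute your own.

```
# List all buckets visible to the configured credentials.
rclone lsd remote:

# Make a bucket, copy a directory into it, then list what arrived.
rclone mkdir remote:bucket
rclone sync /home/local/directory remote:bucket
rclone ls remote:bucket
```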
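
On the Ceph `\/` point above: rather than un-escaping the secret by hand, the
key can be pulled out of the JSON dump with any JSON-aware tool, which undoes
the `\/` escaping automatically. A minimal sketch, assuming the dump comes
from `radosgw-admin` and that `jq` is available (neither is part of the patch
itself):

```
# Print the secret access key with the JSON \/ escaping already undone,
# ready to paste into the rclone config as secret_access_key.
radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'
```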