* Use a single object upload for files of 67108864 bytes (64 MiB) or less
* Use a multipart upload for files larger than 67108864 bytes, and
calculate MD5SUMs during the upload process (see the first sketch below)
* For Mkdir and Rmdir, block and wait for the QingStor service to sync its
status, to handle edge cases such as creating a just-deleted bucket or
deleting a just-created bucket (see the second sketch below)
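
A minimal sketch of the size cut-off, with hypothetical uploadSingle and
uploadMultipart helpers standing in for the real QingStor calls:

```go
package main

import "fmt"

// maxSinglePartSize mirrors the 67108864-byte (64 MiB) cut-off described above.
const maxSinglePartSize = 64 * 1024 * 1024

// upload picks the strategy by object size; in the multipart path the MD5
// sums would be calculated per part as the upload streams.
func upload(name string, size int64) error {
	if size <= maxSinglePartSize {
		return uploadSingle(name, size)
	}
	return uploadMultipart(name, size)
}

// uploadSingle and uploadMultipart are placeholders for the real API calls.
func uploadSingle(name string, size int64) error {
	fmt.Printf("single-part upload of %s (%d bytes)\n", name, size)
	return nil
}

func uploadMultipart(name string, size int64) error {
	fmt.Printf("multipart upload of %s (%d bytes)\n", name, size)
	return nil
}

func main() {
	_ = upload("small.bin", 1024)
	_ = upload("big.bin", 200*1024*1024)
}
```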
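
And a rough illustration of the blocking wait for Mkdir/Rmdir; the
bucketReady check, retry count, and delay are all assumptions for
illustration, not the actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForSync polls until the bucket reaches the wanted state, guarding
// against races such as recreating a just-deleted bucket.
func waitForSync(bucket string, wantExists bool) error {
	for i := 0; i < 10; i++ {
		if bucketReady(bucket) == wantExists {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for bucket state to sync")
}

// bucketReady is a hypothetical stand-in for a HEAD-bucket style status call.
func bucketReady(bucket string) bool {
	return true
}

func main() {
	fmt.Println(waitForSync("example-bucket", true))
}
```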
* Fixup bitrot (rclone and Azure library)
* Implement Copy
* Add modtime to metadata under the mtime key as RFC3339Nano (see the sketch after this list)
* Make multipart upload work
* Make it pass the integration tests
* Fix uploading of zero length blobs
* Rename to azureblob as it seems likely we will do azurefile
* Add docs
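
A small sketch of the mtime round trip with Go's standard time package; the
metadata map here is just a stand-in for the real blob metadata:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Store the modification time under the "mtime" key as RFC3339Nano,
	// as the entry above describes.
	metadata := map[string]string{
		"mtime": time.Now().UTC().Format(time.RFC3339Nano),
	}
	fmt.Println(metadata["mtime"])

	// Reading it back: parse the stored string to recover the timestamp.
	t, err := time.Parse(time.RFC3339Nano, metadata["mtime"])
	if err != nil {
		panic(err)
	}
	fmt.Println(t)
}
```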
Add new package qingstor to support the QingStor API.
Add new unit tests for it, which pass, though some test cases are
commented out because of QingStor-specific behaviour.
Add new docs for it.
* Fix remaining problems
* Refactor to make testing easier and add a test suite
* Make path parsing more robust.
* Add single file operations
* Add MimeType reading for objects (see the sketch after this list)
* Add documentation
* Note go1.7+ is required to build
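
One way MIME types can be read, shown with Go's standard mime package keyed
off the file extension; whether the backend derives the type this way or
reads a server-side value is not specified here:

```go
package main

import (
	"fmt"
	"mime"
	"path/filepath"
)

// mimeTypeOf guesses an object's MIME type from its file extension.
func mimeTypeOf(remote string) string {
	return mime.TypeByExtension(filepath.Ext(remote))
}

func main() {
	fmt.Println(mimeTypeOf("photo.jpg")) // image/jpeg
	fmt.Println(mimeTypeOf("doc.pdf"))   // application/pdf
}
```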
* Add support to the hashing module
* Add dbhashsum to list the hashes
* Add support to the dropbox module
This means objects uploaded to and downloaded from Dropbox will have their
hashes checked (the Dropbox hash algorithm is sketched below).
Note that after this change local objects calculate MD5, SHA1 and
DBHASH, which is excessive and needs to be fixed.
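
For context, the Dropbox hash is computed by SHA-256-hashing each 4 MiB
block of the file and then SHA-256-hashing the concatenation of the block
digests, per Dropbox's published content-hash algorithm; a self-contained
sketch:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

const dropboxBlockSize = 4 * 1024 * 1024 // Dropbox hashes 4 MiB blocks

// dropboxContentHash hashes each 4 MiB block with SHA-256, then hashes the
// concatenated block digests to produce the final hex-encoded hash.
func dropboxContentHash(r io.Reader) (string, error) {
	overall := sha256.New()
	buf := make([]byte, dropboxBlockSize)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			block := sha256.Sum256(buf[:n])
			overall.Write(block[:])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			return "", err
		}
	}
	return hex.EncodeToString(overall.Sum(nil)), nil
}

func main() {
	sum, _ := dropboxContentHash(strings.NewReader("hello world"))
	fmt.Println(sum)
}
```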