- fix test case FsNewObjectCaseInsensitive (PR #4830)
- continue PR #4917, add comments in metadata detection code
- add warning about metadata detection in user documentation
- change metadata size limits, make room for future development
- hide critical chunker parameters from command line
Before this patch chunker required at least one data chunk before it
would start checking for a composite object.
Now if chunker finds at least one potential temporary or control
chunk, it marks the found files as a suspected composite object.
When rclone later attempts a concrete operation on the object, it
performs the postponed metadata read and decides whether this is a
native composite object, incompatible metadata or just garbage.
Before this change attempting to return an error from core/command
failed with a 500 error and a message about unmarshalable types.
This was because it attempted to marshal the input parameters, which
by then had _response added to them, and _response contains an
unmarshalable field.
This was fixed by using the original parameters in the error response
rather than the copy modified during the error handling.
This also adds end-to-end tests for the streaming facilities as used
in core/command.
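As a rough standalone illustration of the marshalling failure (not the
actual rc handler code; the parameter names and the func stand-in for
the unmarshalable field are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Original caller input as it arrived over the rc API.
	in := map[string]interface{}{"command": "version"}

	// The handler works on a copy that gains an internal "_response" entry.
	mutated := map[string]interface{}{}
	for k, v := range in {
		mutated[k] = v
	}
	// A func value stands in for the unmarshalable field hidden inside the
	// real response writer; json.Marshal cannot encode it.
	mutated["_response"] = func() {}

	if _, err := json.Marshal(mutated); err != nil {
		fmt.Println("broken error path:", err) // the pre-fix 500 came from here
	}

	// Fixed behaviour: build the error response from the untouched input.
	out, _ := json.Marshal(in)
	fmt.Println("fixed error path uses:", string(out))
}
```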
Before this change calling core/command gave the error
error: response object is required expecting *http.ResponseWriter value for key "_response" (was *http.response)
This was because http.ResponseWriter is an interface, not a concrete
type, so the value stored under "_response" implements the interface
but is never a pointer to it.
Removing the `*` fixes the problem.
This also indicates that this bit of code wasn't properly tested.
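A small standalone Go illustration of why the pointer assertion can
never succeed (the map lookup here is a simplification, not the real rc
parameter handling):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// An http.ResponseWriter is an interface; the stored value is always a
	// concrete type implementing it (here *httptest.ResponseRecorder, in a
	// real server *http.response).
	var w http.ResponseWriter = httptest.NewRecorder()
	params := map[string]interface{}{"_response": w}

	// Broken check: nothing stored under "_response" is a *http.ResponseWriter.
	if _, ok := params["_response"].(*http.ResponseWriter); !ok {
		fmt.Println("type assertion to *http.ResponseWriter fails")
	}

	// Fixed check: assert against the interface type itself.
	if rw, ok := params["_response"].(http.ResponseWriter); ok {
		fmt.Printf("got a ResponseWriter: %T\n", rw)
	}
}
```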
This includes an HDFS docker image to use with the integration tests.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Before this change, if one connection was authenticating this would
block any others from authenticating.
This was due to ssh.NewServerConn not being called in a goroutine
after the Accept call.
This is fixed by running the ssh authentication in a goroutine.
Thanks to @FiloSottile for advice on how to fix this.
See: https://github.com/golang/go/issues/43521
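A sketch of the accept-loop shape this fix implies, written against
golang.org/x/crypto/ssh directly; it is not the actual rclone serve
sftp code:

```go
package sketch

import (
	"net"

	"golang.org/x/crypto/ssh"
)

// acceptLoop runs the SSH handshake (which includes authentication) in a
// goroutine, so one slow or stalled client can no longer block other
// connections from authenticating.
func acceptLoop(listener net.Listener, config *ssh.ServerConfig) {
	for {
		conn, err := listener.Accept()
		if err != nil {
			return
		}
		go func(c net.Conn) {
			// Before the fix this call ran inline in the accept loop.
			sconn, chans, reqs, err := ssh.NewServerConn(c, config)
			if err != nil {
				c.Close()
				return
			}
			defer sconn.Close()
			go ssh.DiscardRequests(reqs)
			for newChan := range chans {
				newChan.Reject(ssh.Prohibited, "channels not handled in this sketch")
			}
		}(conn)
	}
}
```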
Unlike the approach described in the official B2 blog post, the current
authentication scheme works without creating a public download endpoint
for a private bucket.
Note that if the existing Authorization header gets duplicated in the
Cloudflare Workers script, requests may fail with 401 Unauthorized errors.
Implemented an empty folder flag for the ncdu browser interface. If
there is an empty folder in the list, the flag "e" is prepended before
the size; if there is no empty folder the flag is omitted. This is the
same behaviour as the original ncdu browser. (https://dev.yorhel.nl/ncdu/man)
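A rough sketch of the flag formatting, with hypothetical names (the
real ncdu drawing code is more involved):

```go
package main

import "fmt"

// sizeColumn is a hypothetical helper: it prefixes the size with the "e"
// flag when the entry contains an empty directory, as the original ncdu
// does, and keeps the columns aligned when the flag is omitted.
func sizeColumn(size string, hasEmptyDir bool) string {
	if hasEmptyDir {
		return "e " + size
	}
	return "  " + size
}

func main() {
	fmt.Println(sizeColumn("4.5Ki", true))  // "e 4.5Ki"
	fmt.Println(sizeColumn("4.5Ki", false)) // "  4.5Ki"
}
```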
Before this change the webdav backend didn't truncate Range requests
to the size of the object. Most webdav providers are OK with this (it
is RFC compliant), but it causes 4shared to return a 500 internal error.
Because Range requests are used in mounting, this meant that mounting
didn't work for 4shared.
This change truncates the Range request to the size of the object.
See: https://forum.rclone.org/t/cant-copy-use-files-on-webdav-mount-4shared-that-have-foreign-characters/21334/
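A hedged sketch of the clamping behaviour, independent of the actual
webdav backend code:

```go
package main

import "fmt"

// truncateRange clamps the requested byte range to the object size so a
// request like "bytes=0-999999" against a 1000-byte object is sent to the
// provider as "bytes=0-999". Illustrative only, not the backend's code.
func truncateRange(start, end, size int64) (int64, int64) {
	if end < 0 || end >= size {
		end = size - 1
	}
	if start < 0 {
		start = 0
	}
	return start, end
}

func main() {
	fmt.Println(truncateRange(0, 999999, 1000)) // 0 999
}
```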