Unfortunately bcrypt only hashes the first 72 bytes of a given input, which meant that using it on SSH keys longer than 72 bytes was incorrect.
This change swaps over to using SHA-256, which should be adequate for the purpose of protecting in-memory passwords where the unencrypted password is likely in memory too.
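A minimal sketch of the idea, assuming a small helper package of our own (not rclone's actual API):

    package auth

    import (
        "crypto/sha256"
        "crypto/subtle"
    )

    // hashSecret hashes the whole secret with SHA-256, so keys longer than
    // 72 bytes are covered in full, unlike bcrypt which truncates its input.
    func hashSecret(secret []byte) [sha256.Size]byte {
        return sha256.Sum256(secret)
    }

    // checkSecret compares a candidate against a stored digest in constant time.
    func checkSecret(candidate []byte, stored [sha256.Size]byte) bool {
        got := sha256.Sum256(candidate)
        return subtle.ConstantTimeCompare(got[:], stored[:]) == 1
    }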
The failure is this, which is not reproducible locally, only on the CI servers:
    --- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
        fs.go:351:
                Error Trace:    fs.go:351
                                write.go:65
                Error:          Received unexpected error:
                                open E:testwrite: The request could not be performed because of an I/O device error.
                Test:           TestMount/CacheMode=minimal/TestWriteFileOverwrite
The corresponding ERROR from the log is this:
    ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.
Instead of using ioutil.WriteFile, this fix uses an equivalent based on rclone's lib/file which doesn't set the exclusive flag on Windows. This allows files which are still open to be deleted. It also deletes the existing file and retries if an error is received.
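A rough sketch of what such a helper could look like, assuming lib/file exposes an os.OpenFile-style OpenFile function (the retry policy here is illustrative):

    package vfstest

    import (
        "io"
        "os"

        "github.com/rclone/rclone/lib/file"
    )

    // writeFile is an ioutil.WriteFile equivalent which uses lib/file so that,
    // on Windows, the file is opened with sharing flags that allow it to be
    // deleted while open. On error it removes the existing file and retries once.
    func writeFile(name string, data []byte, perm os.FileMode) error {
        f, err := file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
        if err != nil {
            _ = os.Remove(name)
            f, err = file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
            if err != nil {
                return err
            }
        }
        n, err := f.Write(data)
        if err == nil && n < len(data) {
            err = io.ErrShortWrite
        }
        if closeErr := f.Close(); err == nil {
            err = closeErr
        }
        return err
    }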
For a few commands, rclone counted an error multiple times. This was fixed by
creating a new error type which keeps a flag to remember whether the error has
already been counted or not. The CountError function now wraps the original
error with the above new error type and returns it.
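A minimal sketch of the idea (the type name and counter plumbing are illustrative, not rclone's exact code):

    package accounting

    // countedError wraps an error and remembers that it has already been
    // counted, so passing it through CountError again won't inflate the total.
    type countedError struct {
        error
        counted bool
    }

    // CountError bumps the counter only the first time a given error is seen
    // and returns the (possibly wrapped) error for the caller to use.
    func CountError(err error, counter *int64) error {
        if err == nil {
            return nil
        }
        if ce, ok := err.(*countedError); ok && ce.counted {
            return ce // already counted once
        }
        *counter++
        return &countedError{error: err, counted: true}
    }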
Before this change the race tests were taking too long. The bcrypt function went from about 20ms to 1s under the race detector, and it is called for every transaction on webdav.
This change reduces the bcrypt strength so that it takes about 1ms without the race detector; the race tests now pass and the hash still has adequate security for in-memory-only storage.
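Reducing the strength amounts to lowering the bcrypt cost parameter, roughly as in this sketch (the helper name is illustrative):

    package auth

    import "golang.org/x/crypto/bcrypt"

    // hashForMemory hashes a password for in-memory comparison only.
    // bcrypt.MinCost (4) is far cheaper than bcrypt.DefaultCost (10), which
    // matters when the hash is recomputed on every webdav transaction.
    func hashForMemory(password []byte) ([]byte, error) {
        return bcrypt.GenerateFromPassword(password, bcrypt.MinCost)
    }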
On the google backends (drive, google photos, and google cloud storage), if headless is selected, do not open the browser.
This also supplies a new option "auth-no-open-browser" for authorize if the user does not want the browser opened.
This should fix #3323.
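For example, assuming the flag is spelled as above:

    rclone authorize "drive" --auth-no-open-browser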
Before this change `rclone mount` would give this error on FreeBSD:

    mount helper error: mount_fusefs: -o timeout=: option not supported

This was because the default value of --daemon-timeout was set to 15m, and FreeBSD does not support the timeout option.
This change sets the default for --daemon-timeout to 0 on FreeBSD, which fixes the problem.
Fixes #3610
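A sketch of how an OS-dependent default like this could be chosen (illustrative; how rclone actually plumbs the per-OS default may differ):

    package mountlib

    import (
        "runtime"
        "time"
    )

    // defaultDaemonTimeout returns the default for --daemon-timeout.
    func defaultDaemonTimeout() time.Duration {
        if runtime.GOOS == "freebsd" {
            // FreeBSD's mount_fusefs does not understand -o timeout=,
            // so leave the option unset.
            return 0
        }
        return 15 * time.Minute
    }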
Before this change the sftp handler returned a nil error for unknown operations, which meant the server crashed when one was encountered. In particular the "Readlink" operation was causing problems.
After this change the handler returns ErrSshFxOpUnsupported, which signals to the remote end that we don't support that operation.
See: https://forum.rclone.org/t/rclone-serve-sftp-not-working-in-windows/12209
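Roughly, the handler ends up looking something like this sketch (built on github.com/pkg/sftp, which defines ErrSshFxOpUnsupported; the surrounding types are illustrative):

    package sftpd

    import "github.com/pkg/sftp"

    type vfsHandler struct{}

    // Filecmd handles metadata operations; anything not implemented now
    // reports "unsupported" instead of silently returning nil.
    func (v vfsHandler) Filecmd(r *sftp.Request) error {
        switch r.Method {
        case "Rename", "Rmdir", "Mkdir", "Remove":
            // ... supported operations handled here ...
            return nil
        default:
            // eg "Readlink" - tell the client we can't do it rather than
            // crashing the server.
            return sftp.ErrSshFxOpUnsupported
        }
    }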
If a file handle is duplicated with dup() and the duplicate handle is
flushed, rclone will go ahead and close the file, making the original
file handle stale. This change removes the close() call from Flush() and
replaces it with FlushWrites() so that the file only gets closed when
Release() is called. The new FlushWrites method takes care of actually
writing the file back to the underlying storage.
Fixes #3381
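A sketch of the Flush/Release split, using bazil.org/fuse method signatures (the FileHandle type and FlushWrites name are illustrative):

    package mount

    import (
        "context"

        "bazil.org/fuse"
    )

    type fileWriter interface {
        FlushWrites() error
        Close() error
    }

    type FileHandle struct {
        h fileWriter
    }

    // Flush is called for every close(fd), including on handles duplicated
    // with dup(), so it must only write data back - not close the file.
    func (f *FileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
        return f.h.FlushWrites()
    }

    // Release is called once when the last reference to the handle goes away;
    // this is where the file is actually closed.
    func (f *FileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
        return f.h.Close()
    }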
There seem to be some corner cases that are not being handled, so take a different approach that should be a little more robust.
Also, change resources to be served under a subpath: we've been serving media at /res?path=%2Fdir%2Ffilename.mp4; change that to be just /r/dir/filename.mp4. It's cleaner, easier to reason about, and a necessary first step towards serving the resources via httplib anyway.
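With the standard library this amounts to something like the following sketch (rclone's dlna server wires the handler up through its own httplib, so the names here are illustrative):

    package dlna

    import "net/http"

    // newMux serves resources under /r/, so /r/dir/filename.mp4 maps directly
    // onto the remote path dir/filename.mp4, replacing the old
    // /res?path=%2Fdir%2Ffilename.mp4 form.
    func newMux(files http.Handler) *http.ServeMux {
        mux := http.NewServeMux()
        mux.Handle("/r/", http.StripPrefix("/r/", files))
        return mux
    }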
This problem was introduced in "mount: allow files of unknown size to be read properly" 0baafb158f by failing to check whether the DirEntry was nil or not.
Allows filename.srt, filename.en.srt, etc., to be automatically associated with video.mp4 (or whatever) when playing over dlna.
This is the "modern" method, which I've verified to work on VLC and on LG webOS 2. There is a vendor-specific mechanism for Samsung that I haven't been able to get working on my F series.
Also made some minor corrections to logging and container IDs.
Before this change, files of unknown size (eg Google Docs) would
appear in file listings with 0 size and would only allow 0 bytes to be
read.
This change sets the direct_io flag in the FUSE return which bypasses
the cache for these files. This means that they can be read properly.
This is compatible with some, but not all applications.
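A sketch of what setting the flag looks like with bazil.org/fuse (the size check and types are illustrative):

    package mount

    import (
        "context"

        "bazil.org/fuse"
        "bazil.org/fuse/fs"
    )

    type File struct {
        sizeKnown bool
        handle    fs.Handle
    }

    // Open sets direct_io for files whose size we don't know, so the kernel
    // bypasses the page cache and reads aren't truncated to the reported 0 size.
    func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
        if !f.sizeKnown {
            resp.Flags |= fuse.OpenDirectIO
        }
        return f.handle, nil
    }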
This detects the presence of a VT100 terminal by using the TERM environment variable and switches to using VT100 codes directly under Windows if it is found.
This makes --progress work correctly with git bash.
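The check is roughly this (the function name and exact heuristic are illustrative):

    package terminal

    import (
        "os"
        "runtime"
    )

    // useVT100 reports whether VT100/ANSI codes can be written directly.
    func useVT100() bool {
        if runtime.GOOS != "windows" {
            return true
        }
        // git bash (mintty) sets TERM, eg to "xterm", whereas a plain
        // cmd.exe console does not - treat that as VT100 support.
        return os.Getenv("TERM") != ""
    }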
Before this change it was possible to make a remote with an invalid
name in the config file, either manually or with `rclone config
create` (but not with `rclone config`).
When this remote was used, because it was invalid, rclone would presume this remote name was a local directory, for a very surprising user experience!
This change checks remote names more carefully and returns errors
- when the user tries to use an invalid remote name on the command line
- when an invalid remote name is used in `rclone config create/update/password`
- when the user tries to enter an invalid remote name in `rclone config`
This does not prevent the user from entering a remote name with invalid characters in the config manually, but such a remote will fail immediately when it is used on the command line.
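A sketch of the kind of check involved; the exact character set rclone allows is an assumption here (word characters, '.', '-' and space, not starting with '-' or space):

    package config

    import (
        "fmt"
        "regexp"
    )

    var remoteNameRe = regexp.MustCompile(`^[\w.][\w. -]*$`)

    // checkRemoteName returns an error if name is not a valid remote name.
    func checkRemoteName(name string) error {
        if !remoteNameRe.MatchString(name) {
            return fmt.Errorf("invalid remote name %q", name)
        }
        return nil
    }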
A workaround for #3489. The code in `__rclone_custom_func` relies on process substitution `<(...)` to preserve changes made to variables inside `while` bodies, which is not supported in posix mode.