Before this change, if a file was uploaded to a backend which didn't
support modtimes, the modtime of the file read back after the upload
had completed would change to the time the file was uploaded on the
backend.
When using `--vfs-cache-mode writes` or `full` this time would differ
by the `--vfs-write-back` delay, which would cause applications to
think the file had been modified.
This change uses the last modification time read by the OS as a
virtual modtime for backends which don't support setting modtimes. It
does not change the modtime to the time actually uploaded.
This means that as long as the file remains in the directory cache it
will have the expected modtime.
See: https://forum.rclone.org/t/saving-files-causes-wrong-modified-time-to-be-set-for-a-few-seconds-on-webdav-mount-with-bitrix24/36451
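A rough sketch of the idea (the type and field names here are
illustrative, not the actual VFS code):

```go
package sketch

import "time"

// file is a stand-in for a VFS node on a backend which can't set modtimes.
type file struct {
	virtualModTime *time.Time // last modtime the OS read or set
	remoteModTime  time.Time  // modtime reported by the backend
	modTimeUsable  bool       // true if the backend supports setting modtimes
}

// ModTime prefers the virtual modtime while the file is in the
// directory cache, so applications see a stable value rather than the
// upload time.
func (f *file) ModTime() time.Time {
	if !f.modTimeUsable && f.virtualModTime != nil {
		return *f.virtualModTime
	}
	return f.remoteModTime
}
```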
Before this change, if you renamed a directory containing files yet
to be uploaded and then deleted the directory, the files would still
be uploaded.
This fixes the problem by changing the directory path in all the file
objects in a directory when it is renamed. This wasn't necessary until
we introduced virtual files and directories which lived beyond the
directory flush mechanism.
Fixes #6809
Before this fix, opening a file with `O_CREATE|O_RDONLY` caused an IO error to
be returned when using `--vfs-cache-mode off` or `--vfs-cache-mode writes`.
This was because the file was opened with read intent, but the `O_CREATE`
implies write intent to create the file even though the file is opened
`O_RDONLY`.
This fix sets write intent for the file if `O_CREATE` is passed in which fixes
the problem for all the VFS cache modes.
It also extends the exhaustive open flags testing to `--vfs-cache-mode writes`
as well as `--vfs-cache-mode full` which would have caught this problem.
See: https://forum.rclone.org/t/i-o-error-trashing-file-on-sftp-mount/34317/
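A minimal sketch of deriving write intent from the open flags as
described above (hypothetical helper, not the actual VFS code):

```go
package main

import (
	"fmt"
	"os"
)

// writeIntent reports whether an open call needs write access to the
// backing file. O_CREATE implies write intent even when the access
// mode is O_RDONLY, because the file may need to be created.
func writeIntent(flags int) bool {
	if flags&(os.O_WRONLY|os.O_RDWR) != 0 {
		return true
	}
	return flags&os.O_CREATE != 0
}

func main() {
	fmt.Println(writeIntent(os.O_CREATE | os.O_RDONLY)) // true
	fmt.Println(writeIntent(os.O_RDONLY))               // false
}
```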
Before this patch a deadlock could occur if the cache cleaner was
running when an object upload finished.
This fixes the problem by delaying marking the object as clean until
we have notified the VFS layer. This means that the cache cleaner
won't consider the object until **after** the VFS layer has been
notified, thus avoiding the deadlock.
See: https://forum.rclone.org/t/rclone-mount-deadlock-when-dir-cache-time-strikes/33486/
Before this change we truncated files in the backing store regardless
of whether we needed to or not.
After, we check to see if the file is the right size and don't
truncate if it is.
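A minimal sketch of the check (hypothetical helper name):

```go
package sketch

import "os"

// truncateIfNeeded only truncates the backing file when its size
// actually differs from the wanted size, avoiding a pointless
// metadata update on the file.
func truncateIfNeeded(f *os.File, size int64) error {
	fi, err := f.Stat()
	if err != nil {
		return err
	}
	if fi.Size() == size {
		return nil // already the right size
	}
	return f.Truncate(size)
}
```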
Apparently Windows Defender likes to check executables each time they
are modified, and truncating a file to its existing size is enough to
trigger the Windows Defender scan. This was causing a big slowdown for
operations which opened and closed the file a lot, such as looking at
properties on an executable.
See: https://forum.rclone.org/t/for-mount-sftp-why-right-click-on-exe-file-is-so-slow-until-it-freezes/33830
Before this change if we copied files of unknown size, then they lost
their metadata.
This was particularly noticeable using --s3-decompress.
This change adds metadata to Rcat and RcatSized and changes Copy to
pass the metadata in when it calls Rcat for an unknown sized input.
Fixes #6546
Before this change the VFS cache cleaner would loop indefinitely while
the cache was above quota. This used up all the CPU.
This fix prevents the cache cleaner from looping. It will be kicked on
ENOSPC and will otherwise run at its scheduled time, so this should be
sufficient.
See: https://forum.rclone.org/t/vfs-keeps-checking-same-files/32120
This message is a double panic and was actually caused by an assertion
panic in:
vfs/vfscache/downloaders/downloaders.go
This is triggered by the code added relatively recently to fix a bug
with renaming files:
ec72432cec vfs: fix failed to _ensure cache internal error: downloaders is nil error
So it appears that item.o may be nil at this point.
This patch detects item.o being nil and fetches it again with NewObject.
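A sketch of the fix (parameter names and the surrounding function are
assumptions; the real code is in vfs/vfscache):

```go
package sketch

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
)

// ensureObject fetches the remote object again if the item has lost
// it, for example after a rename cleared the downloaders.
func ensureObject(ctx context.Context, remote fs.Fs, name string, o fs.Object) (fs.Object, error) {
	if o != nil {
		return o, nil
	}
	o, err := remote.NewObject(ctx, name)
	if err != nil {
		return nil, fmt.Errorf("vfs cache: reload object %q: %w", name, err)
	}
	return o, nil
}
```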
Fixes #6190
Fixes #6235
Before this change we ran the tests and the mount in the same process.
This could cause deadlocks and often did, and made the mount tests
very unreliable.
This fixes the problem by running the mount in a separate process and
commanding it via a pipe over stdin/stdout.
A difference between filepath.Dir and path.Dir is that filepath.Dir
returns the OS PathSeparator instead of a slash when the path consists
entirely of separators.
Also fixed the casing of the function name to use OS in all caps
instead of Os, as recommended here: https://github.com/golang/go/wiki/CodeReviewComments#initialisms
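For illustration, the documented behaviour of the two functions:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Dir works on slash-separated paths and returns "/" when
	// the path consists entirely of slashes.
	fmt.Println(path.Dir("///")) // "/"

	// filepath.Dir uses the OS separator, so on Windows a path made
	// up entirely of separators comes back as `\` rather than "/".
	fmt.Println(filepath.Dir(`\\\`)) // `\` on Windows
}
```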
In this commit
f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display
The --devname parameter was added. However it was soon noticed that
attempting to mount via the rc gave this error:
mount helper error: fusermount: unknown option 'fsname'
mount FAILED: fusermount: exit status 1
This was because the DeviceName (and VolumeName) parameter was never
being initialised when the mount was called via the rc.
The fix for this was to refactor the rc interface so it called the
same Mount method as the command line mount which initialised the
DeviceName and VolumeName parameters properly.
This also fixes the cmd/mount tests which were breaking in the same
way but since they aren't normally run on the CI we didn't notice.
Fixes #6044
This error was caused by renaming an open file.
When the file was renamed in the cache, the downloaders were cleared,
however the downloaders were not re-opened when needed again, instead
this error was generated.
This fix re-opens the downloaders if they have been closed by renaming
the file.
Fixes #5984
Previously only the fs being checked on was passed to
GetModifyWindow(). However, in most tests, the test files are
generated in the local fs and transferred to the remote fs. So the
local fs time precision has to be taken into account.
This meant that on Windows the time tests failed because the
local fs has a time precision of 100ns. Checking remote items uploaded
from local fs on Windows also requires a modify window of 100ns.
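A sketch of what the tests can then do, assuming fs.GetModifyWindow
accepts more than one filesystem as the commit implies:

```go
package sketch

import (
	"context"
	"time"

	"github.com/rclone/rclone/fs"
)

// testModifyWindow returns the precision to compare times with in the
// tests: the larger of the remote's and the local filesystem's
// precision, so the 100ns local precision on Windows is accounted for.
func testModifyWindow(ctx context.Context, remoteFs, localFs fs.Fs) time.Duration {
	return fs.GetModifyWindow(ctx, remoteFs, localFs)
}
```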
This is possible now that we no longer support go1.12 and brings
rclone into line with standard practices in the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
This replaces built-in os.MkdirAll with a patched version that stops the recursion
when reaching the volume part of the path. The original version would continue recursion,
and for extended length paths end up with \\? as the top-level directory, and the error
message would then be something like:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
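A much simplified sketch of the idea (the real patched version lives
in lib/file):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// mkdirAll creates path and any missing parents, but stops recursing
// once only the volume name (e.g. `\\?\C:` or `C:`) is left, instead
// of walking all the way up to `\\?`.
func mkdirAll(path string, perm os.FileMode) error {
	vol := filepath.VolumeName(path)
	if path == vol || path == vol+string(filepath.Separator) {
		return nil // reached the volume root, nothing to create
	}
	if parent := filepath.Dir(path); parent != path {
		if err := mkdirAll(parent, perm); err != nil {
			return err
		}
	}
	if err := os.Mkdir(path, perm); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}
```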
Before this change, file handles could get closed while the loop that
truncates file handles was running.
This meant that occasionally an ECLOSED (which is translated into
EBADF by cmd/mount) would spuriously be returned if a Release happened
in the middle of a Truncate call (Setattr called with
size=0).
This change ignores the ECLOSED while truncating file handles.
See: https://forum.rclone.org/t/writes-to-wasabi-mount-failing-with-bad-file-descriptor-intermittently/26321
Before this change, if there was an existing file being uploaded when
a file was renamed on top of it, then both would be uploaded. This
causes a duplicate in Google Drive as both files get uploaded at the
same time. This was triggered reliably by LibreOffice saving doc
files.
This fix removes any duplicates in the upload queue on rename.
Before this change but after:
aea8776a43 vfs: fix modtimes not updating when writing via cache #4763
When a file was opened read-only the modtime was read from the cached
file. However this modtime wasn't correct leading to an incorrect
result.
This change fixes the definition of `item.IsDirty` to be true only
when the data is dirty. This fixes the problem as a read only file
isn't considered dirty.
Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
The vfs-cache-max-size parameter is probably confusing to many users.
The cache cleaner checks the cache size periodically, at the
--vfs-cache-poll-interval (default 60 seconds), and removes cache
items in the following order:
(1) cache items that are not in use and whose age is > vfs-cache-max-age
(2) if the cache space used at this time is still larger than
vfs-cache-max-size, the cleaner continues to remove cache items that
are not in use.
The cache cleaning process does not remove cache items that are currently in use.
If the total space consumed by in-use cache items exceeds vfs-cache-max-size, the
periodic cache cleaner thread does nothing further and leaves the in-use
cache items alone, with a total space larger than vfs-cache-max-size.
A cache reset feature was introduced in 1.53 which resets in-use (but not dirty,
i.e., not being updated) cache items when additional cache data incurs an ENOSPC
error. But this code was not activated in the periodic cache cleaning thread.
This patch adds the cache reset step in the cache cleaner thread during cache
poll to reset cache items until the total size of the remaining cache items is
below vfs-cache-max-size.
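A schematic of that cleaning pass with the reset step added (types and
helpers are stand-ins, not the actual vfscache code):

```go
package sketch

import "time"

// item is a stand-in for a cache item; the real type lives in vfs/vfscache.
type item struct {
	size  int64
	atime time.Time
	inUse bool
	dirty bool
}

func (it *item) remove() { it.size = 0 }
func (it *item) reset()  { it.size = 0 } // empty the data, keep the item

func totalSize(items []*item) (n int64) {
	for _, it := range items {
		n += it.size
	}
	return n
}

// cleanPass runs the cleaning order described above; items are assumed
// to be sorted oldest-access first (LRU order).
func cleanPass(items []*item, maxAge time.Duration, maxSize int64) {
	// 1. remove not-in-use items older than --vfs-cache-max-age
	for _, it := range items {
		if !it.inUse && time.Since(it.atime) > maxAge {
			it.remove()
		}
	}
	// 2. still over --vfs-cache-max-size: remove more not-in-use items
	for _, it := range items {
		if totalSize(items) <= maxSize {
			return
		}
		if !it.inUse {
			it.remove()
		}
	}
	// 3. (this patch) still over quota: reset in-use but clean items
	for _, it := range items {
		if totalSize(items) <= maxSize {
			return
		}
		if it.inUse && !it.dirty {
			it.reset()
		}
	}
}
```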
Before this fix, doing CTRL-C and CTRL-V on a file in Windows explorer
caused the **source** and the destination to be truncated to 0.
This is because Windows opens the source file with Create with flags
`O_RDWR|O_CREATE|O_EXCL` but doesn't write to it - it only reads from
it. Rclone was taking the call to Create as a signal to always make a
new file, but this is incorrect.
This fix reads an existing file from the directory if it exists when
Create is called rather than always creating a new one. This fixes the
problem.
Fixes #5181
Before this change, if a directory was renamed and it or any children
had virtual entries in it they weren't flushed.
The consequence of this was that the directory path got out sync with
the actual position of the directory in the tree, leading to listings
of the old directory rather than the new one.
The fix renames any directories remaining after the ForgetAll to have
the correct path which fixes the problem.
See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
When using --vfs-cache-mode writes or full, if a file was opened with
write intent, the modtime was set, and the file was closed without
being modified, then the modtime would never be written back to storage.
The sequence of events was:
- app opens file with write intent
- app does set modtime
- rclone sets the modtime on the cache file, but not the remote file
because it is open for write and can't be set yet
- app closes the file without changing it
- rclone doesn't upload the file because the file wasn't changed so
the modtime doesn't get updated
This fixes the problem by making sure any unapplied modtime changes
are applied even if the file is not modified when being closed.
Fixes #4795
Before this change, rclone would return the modification times of the
cache file or the pending modtime which would be more accurate than
the modtime that the backend was capable of.
This meant that the modtime would change slightly when the item was
actually uploaded.
For example modification times on Google Drive would be rounded to the
nearest millisecond.
This fixes the VFS layer to always return modtimes directly from an
object stored on the remote, or rounded to the precision that the
remote is capable of.
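A minimal sketch of the rounding (hypothetical helper name):

```go
package sketch

import (
	"time"

	"github.com/rclone/rclone/fs"
)

// roundToPrecision rounds a modtime to the precision the remote
// reports, so the value returned before upload matches the value that
// will be read back afterwards.
func roundToPrecision(t time.Time, f fs.Info) time.Time {
	precision := f.Precision()
	if precision == fs.ModTimeNotSupported || precision <= 0 {
		return t
	}
	return t.Truncate(precision)
}
```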
Before this change, rclone would set the modification time of an
object after it had been uploaded. However with --vfs-cache-mode
writes and above, the modification time of the object is already
correct as the cache backing file gets set with the correct
modification time before upload.
Setting the modification time causes another version to be created on
backends such as S3 so it should be avoided if possible.
This change checks to see if the modification time needs changing and
only sets it if necessary.
See: https://forum.rclone.org/t/produce-2-versions-when-overwrite-an-object-in-min-io/19634
Some backends, most notably S3, do not report the number of bytes used.
This patch introduces a new flag that, instead of relying on the
backend, uses a recursive scan similar to `rclone size` to compute the
total used space. However, this is inefficient and should be used as a
last resort.
Co-authored-by: Yves G <theYinYeti@yalis.fr>
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.
This change enables these types to be set with a string with the
syntax as used in the command line instead, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
The initial ':' is included in the ad-hoc remote name, but it is an
illegal character in a Windows path. Replace it with '^', which is
legal on the filesystem but illegal in regular remote names, so a name
conflict is avoided.
Fixes #4544
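A sketch of the substitution (hypothetical helper name):

```go
package sketch

import "strings"

// cachePathSafeName makes an on-the-fly remote name such as ":http:"
// safe to use in a Windows cache path. The leading ':' is not a legal
// path character on Windows; '^' is legal on the filesystem but
// illegal in regular remote names, so the swap cannot collide with a
// real remote.
func cachePathSafeName(remoteName string) string {
	if strings.HasPrefix(remoteName, ":") {
		return "^" + remoteName[1:]
	}
	return remoteName
}
```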
Before this change using --vfs-cache-mode full and --buffer-size 0
together caused the vfs downloader to open more and more downloaders.
This is fixed by introducing a minimum size of 1M for the window to
look for an existing downloader.
Fixes #4892
Add --network-mode option to activate mounting as network drive without having to set volume prefix.
Add support for automatic drive letter assignment (not specific to network drive mounting).
Allow full network share UNC path in --volname, which will also implicitly activate network drive mounting.
Allow full network share UNC path as mountpoint, which will also implicitly activate network drive mounting, and the specified path will be used as volume prefix and the remote will be mounted on an automatically assigned drive letter instead.
If --cache-dir is passed in as a relative path, then rclone will not
be able to turn it into a UNC path under Windows, which means that
file names longer than 260 chars will fail when stored in the cache.
This patch makes the --cache-dir path absolute before using it.
See: https://forum.rclone.org/t/handling-of-long-paths-on-windows-260-characters/20913
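The essence of the patch, sketched (hypothetical helper name):

```go
package sketch

import (
	"fmt"
	"path/filepath"
)

// resolveCacheDir makes a relative --cache-dir absolute so that it can
// later be converted into a UNC (`\\?\`) path on Windows, lifting the
// 260 character path limit for cached file names.
func resolveCacheDir(cacheDir string) (string, error) {
	absDir, err := filepath.Abs(cacheDir)
	if err != nil {
		return "", fmt.Errorf("make --cache-dir absolute: %w", err)
	}
	return absDir, nil
}
```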
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
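Typical usage then looks roughly like this (the call sites are
illustrative only):

```go
package sketch

import (
	"context"

	"github.com/rclone/rclone/fs"
)

func lowerCheckers(ctx context.Context) context.Context {
	// Read the config attached to the context (or the global default).
	ci := fs.GetConfig(ctx)
	_ = ci.Checkers

	// Get a new context with a mutable copy of the config and change it.
	newCtx, newCi := fs.AddConfig(ctx)
	newCi.Checkers = 4
	return newCtx
}
```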
Before this change if a file was uploaded through a mount, then
deleted externally, trying to upload that file again could give EEXIST
"file already exists".
This was because the file already existing in the cache was confusing
rclone into thinking it already had the file.
The fix is to check whether rclone has a stale cache file and, if so,
to ignore it in this situation.
See: https://forum.rclone.org/t/rclone-cant-reuse-filenames/20400
Before this change, if a file was created on a remote but deleted
externally from that remote then there was potential for the delete to
never be noticed.
The sequence of events was:
- Create file on VFS - creates virtual directory entry
- File deleted externally to remote before the directory refreshed
- Now the file has a virtual add but is not in the listings so will never disappear
This patch fixes it by removing all virtual directory entries except
the following when the directory is re-read.
- On remotes which can't have empty directories: virtual directory
adds are not flushed. These will remain virtual as long as the
directory is empty.
- For virtual file add: files that are in the process of being
uploaded are not flushed
This patch also adds the distinction between virtually added files and
directories.
It also refactors the virtual directory logic to make it easier to follow.
Fixes #4446
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
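For a backend, the change looks roughly like this (the body is
illustrative only):

```go
package sketch

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

// NewFs now receives a context, so the backend can read the global
// config attached to it instead of a package level variable.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	ci := fs.GetConfig(ctx) // global config now comes from the context
	_ = ci
	// ... construct and return the backend Fs here ...
	return nil, errors.New("sketch only")
}
```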
The missed update can cause incorrect before-cleaning cache stats
and a premature condition broadcast in purgeOld before the cache
space use is reduced below the quota.
Add an exponentially increasing delay during retries upon ENOSPC
errors to avoid exhausting the 10 retries too soon, when the cache
space recovered by item resets is not yet available from the file
system or is consumed by other large cache writes.
Item reset is invoked by the cache cleaner for synchronous recovery
from ENOSPC errors. The reset operation removes the cache file and
closes/reopens the downloaders. Although most parts of reset and
other item operations are done with the item mutex held, the mutex
is released during fd.WriteAt and downloaders calls. We used preAccess
and postAccess calls to serialize Reset, ReadAt, and Open, but missed
some other item operations. The patch adds preAccess/postAccess
calls in Sync, Truncate, Close, WriteAt, and rename.
A failed item reset is saved in the errItems for retryFailedResets
to process. If the item gets closed before the retry, the item may
have been removed from the c.item array. Previous code did not
account for this condition. This patch adds the check for the
existence of the retry items in retryFailedResets.
The downloaders.Close() call acquires the downloaders' mutex before
waiting on the wait group, while the main downloaders thread has a
periodic (5 second interval) call to kick its waiters, and the waiter
dispatch function tries to get the same mutex. So a deadlock can occur
if the Close() call starts and gets the mutex after the main
downloader thread has already received the timer's tick but before it
has acquired the mutex in kickWaiters. This window is very short,
maybe something like tens of nanoseconds out of 5 seconds (~10^-8),
which probably explains why the problem had not surfaced before. It
took 5 days of continually stressing the Close calls for the deadlock
to appear.
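A schematic of the deadlock pattern described above (names simplified):

```go
package sketch

import (
	"sync"
	"time"
)

type downloaders struct {
	mu   sync.Mutex
	wg   sync.WaitGroup
	quit chan struct{}
}

func (dls *downloaders) kickWaiters() {
	dls.mu.Lock() // B: blocks forever if Close() already holds mu
	defer dls.mu.Unlock()
	// ... wake up waiters ...
}

func (dls *downloaders) run() {
	defer dls.wg.Done()
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			dls.kickWaiters()
		case <-dls.quit:
			return
		}
	}
}

func (dls *downloaders) Close() {
	dls.mu.Lock() // A: mutex taken first
	defer dls.mu.Unlock()
	close(dls.quit)
	// If run() has just taken the ticker tick it is stuck at B waiting
	// for mu, so this Wait() never returns: deadlock.
	dls.wg.Wait()
}
```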
Before this change, if a file was removed from the cache while rclone
was running, then rclone would not notice and would proceed to
re-create it full of zeros.
This change notices when files that we expect to have data in go
missing and, if they do, logs an ERROR and recovers.
Deleting files from the cache manually while rclone is running isn't
recommended!
See: https://forum.rclone.org/t/corrupted-data-streaming-after-vfs-meta-files-removed/18997
Fixes #4602
Before this change the error message was produced for every file which
was confusing users.
After this change we check for EOF and return from ReadAt at that
point.
See: https://forum.rclone.org/t/rclone-1-53-release/18880/10
This patch provides support for synchronous cache space recovery,
allowing read threads to recover from ENOSPC errors when cache space
can be recovered from cache items that are not in use or are safe to
be reset/emptied.
The patch complements the existing cache cleaning process in two ways.
Firstly, the existing cache cleaning process is time-driven and runs
periodically. The cache space can run out while the cache cleaner
thread is still waiting for its next scheduled run. In this case the
IO threads encountering ENOSPC return an internal error to the
applications even though cache space could be recovered to avoid the
error. This patch addresses this problem by having the read threads
kick the cache cleaner thread in this condition to recover cache
space, preventing unnecessary ENOSPC errors from being seen by the
applications.
Secondly, this patch enhances the cache cleaner to support cache
item reset. Currently the cache purge process removes cache
items that are not in use. This may not be sufficient when the
total size of the working set exceeds the cache directory's
capacity. Like in the current code, this patch starts the purge
process by removing cache files that are not in use. Cache items
whose access times are older than vfs-cache-max-age are removed first.
After that, other not-in-use items are removed in LRU order until the
cache size falls below vfs-cache-max-size. If the cache is still above
vfs-cache-max-size (the quota) at this point, this patch adds a cache
reset step to reset/empty cache files that are still in use but not
dirtied. This enables application processes to continue without
seeing an error even when the working set depletes the cache space
as long as there is not a large write working set hoarding the
entire cache space.
By design this patch does not add ENOSPC error recovery for write
IOs. Rclone does not empty a write cache item until the file data
is written back to the backend upon close. Allowing more cache
space to be consumed by dirty cache items when the cache space is
already running low would increase the risk of exhausting the cache
space in a way that the vfs mount becomes unreadable.
Before this change we set the modtime of the cache file when all
writers had finished.
This had the unfortunate effect that the file was uploaded with the
wrong modtime, which means that on backends which can't set modtimes
except when uploading files, the modtime ends up wrong.
This change sets the modtime of the cache file immediately in the
cache and in turn sets the modtime in the file info.