Hugo has been emitting this warning for a while:
WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.
This turned out to be caused by the addition of the `groups:` keyword to the
command frontmatter. Hugo does something with this keyword, though
this isn't documented in the frontmatter documentation.
The fix was to remove the `groups:` keyword from the frontmatter, since
Hugo never needed it.
Hugo has been emitting this warning for a while:
WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.
This turned out to be caused by:
- the arrangement of the Oracle object storage docs and sub-page
- a missing section template
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker it returns 405 (Method Not Allowed)
instead of the 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html
Before this change we only looked for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (e.g. when
versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id)
we deemed the object to exist when we should not have.
After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.
See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
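A minimal sketch of the resulting check, using a hypothetical helper (the real headObject code differs):

    import "net/http"

    // objectNotFound reports whether an HTTP status means the object
    // should be treated as absent. A versionId that resolves to a
    // delete marker yields 405 rather than 404.
    func objectNotFound(statusCode int) bool {
        return statusCode == http.StatusNotFound || // 404
            statusCode == http.StatusMethodNotAllowed // 405
    }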
QingStor support have disabled the integration test account with this message
尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。
Which Google Translate renders as
> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
On backends whose NewObject returns a valid object for "", touch went
wrong because it thought it had been passed an object.
This should not happen normally, but s3 can be configured with
--s3-no-head, in which case it is happy to believe that all objects exist.
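A hedged sketch of the guard this implies, with an assumed wrapper name (not the actual touch code):

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    // findObject refuses to treat "" as an object path: "" always names
    // the root directory, even on backends whose NewObject("") returns
    // something (eg s3 with --s3-no-head).
    func findObject(ctx context.Context, f fs.Fs, remote string) (fs.Object, error) {
        if remote == "" {
            return nil, fs.ErrorIsDir
        }
        return f.NewObject(ctx, remote)
    }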
Previously this used `rclone test makefiles --seed 0`, which sets a
random seed, and every now and again we got this error:
Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory
This happened because an entry with the same name was created as a file
in the src and as a directory in the dst.
This fixes it by using deterministic seeds each time.
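A minimal sketch of the seed handling, assuming (per the text above) that seed 0 means "pick a random seed":

    import (
        "math/rand"
        "time"
    )

    // newRand is reproducible for any non-zero seed; seed 0 keeps the
    // old behaviour of generating a different tree on every run.
    func newRand(seed int64) *rand.Rand {
        if seed == 0 {
            seed = time.Now().UnixNano()
        }
        return rand.New(rand.NewSource(seed))
    }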
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like
Failed to make new multi hasher: requested set 000001ff contains unknown hash types
This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.
This was added in this commit
d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails
to overcome the tests taking >100s on the local backend, because they
computed every single hash the local backend supports. It brought this
time down to 20s.
This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). It does not fix the problem for a backend which wraps the
local backend (eg combine), but that combination is run only on the
integration test machine and not on all the CI.
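A hedged sketch of that conditional (an illustrative helper, not the actual test code; IsLocal is the feature flag rclone uses to mark the local backend):

    import (
        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/hash"
    )

    // testHashSet narrows the hash set only when the destination really
    // is the local backend; wrapping backends such as combine keep
    // their full sets so multi hashers can still be built.
    func testHashSet(f fs.Fs) hash.Set {
        if f.Features().IsLocal {
            return hash.NewHashSet(f.Hashes().GetOne())
        }
        return f.Hashes()
    }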
Previously, the fixed 10MB chunk size could lead to exceeding the maximum
allowed number of parts for very large files. As with other backends, the
chunk size and upload concurrency are now user-configurable. Additionally,
the internal library is used to automatically adjust the chunk size to avoid
exceeding the upload part limit.
Fixes #7850
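A minimal sketch of the adjustment logic under generic names (the internal library's actual API differs):

    // adjustChunkSize doubles the chunk size until fileSize fits within
    // maxParts parts; all names here are illustrative.
    func adjustChunkSize(fileSize, chunkSize, maxParts int64) int64 {
        for (fileSize+chunkSize-1)/chunkSize > maxParts {
            chunkSize *= 2
        }
        return chunkSize
    }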
This removes the test for PublicLink from the integration tests, as it
is not currently supported in the test environment, to avoid meaningless
retries.
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.
(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change; who knows, maybe somebody relies on it.)
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.
The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
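A minimal sketch of the idea, with assumed key formats (the real serve s3 code differs):

    import (
        "strconv"
        "time"
    )

    // updateMeta records the same modtime under both keys consulted
    // when answering requests, so they can no longer drift apart.
    func updateMeta(meta map[string]string, modTime time.Time) {
        ts := strconv.FormatInt(modTime.UnixNano(), 10)
        meta["mtime"] = ts
        meta["X-Amz-Meta-Mtime"] = ts
    }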
Before this change, renaming a directory d failed to rename its key in
d.parent.items, which caused trouble later when doing Dir.Stat on a
subdirectory. This change fixes the issue.
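A sketch of the fix with assumed field names for the directory cache:

    // Dir is an illustrative stand-in for the real directory type.
    type Dir struct {
        name   string
        parent *Dir
        items  map[string]*Dir
    }

    // renameInParent re-keys d in its parent's items map as well as
    // renaming it, so a later Dir.Stat on a subdirectory no longer
    // resolves through the stale name.
    func renameInParent(d *Dir, newName string) {
        delete(d.parent.items, d.name)
        d.name = newName
        d.parent.items[newName] = d
    }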
`remote` has been converted with ToStandardPath a few lines above, so
`directory` needs to be converted the same way in order to be compared
properly. This was spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`,
which tripped over a directory name containing a line feed character.
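A hedged sketch of the comparison, assuming rclone's encoder API as used elsewhere in these backends:

    import "github.com/rclone/rclone/lib/encoder"

    // samePath encodes directory the same way remote was already
    // encoded, so a name containing eg a line feed no longer compares
    // unequal.
    func samePath(enc encoder.MultiEncoder, remote, directory string) bool {
        return remote == enc.ToStandardPath(directory)
    }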
This attempts to resolve upload conflicts by implementing cancel/cleanup on failed
uploads
* fix panic in deferred upload cancellation
* increase pacer min sleep from 10 ms to 100 ms (see the sketch below)
* stop using uploadByForm()
* introduce a forced sleep before and after async tasks
* use the pacer's retry scheme instead of a manual implementation
Fixes #7787
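A hedged sketch of the pacer change from the list above (the constant is assumed; the backend's real values may differ):

    import (
        "context"
        "time"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/lib/pacer"
    )

    // newPacer waits at least 100 ms (up from 10 ms) between API calls;
    // wrapping each call in p.Call(...) then reuses the pacer's
    // standard retry scheme instead of the manual one.
    func newPacer(ctx context.Context) *fs.Pacer {
        return fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(100*time.Millisecond)))
    }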
Before this change some of the integration tests were producing this error
panic: runtime error: invalid memory address or nil pointer dereference
This was caused by an `fs.Object` whose type (`*Object`) was not `nil`
but whose value was `nil`. Such a value does not compare equal to
`nil`, which led to the panic.
This is a classic Go gotcha: https://go.dev/doc/faq#nil_error
This was easily fixed by changing the return type of one function from
*Object to fs.Object.
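A self-contained illustration of the gotcha (not rclone's code):

    package main

    import "fmt"

    type Object struct{}

    func (o *Object) String() string { return "object" }

    // lookup returns a nil *Object inside a non-nil interface: the
    // interface holds (type=*Object, value=nil).
    func lookup() fmt.Stringer {
        var o *Object
        return o
    }

    func main() {
        if obj := lookup(); obj != nil {
            fmt.Println("non-nil interface despite nil pointer inside")
        }
    }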
Before this change we waited a minimum of 10ms between API calls for
mailru.
The tests no longer pass at this rate, so this increases the time to
100ms.
See #7768
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.
This fixes the problem by making the need to copy metadata explicit
rather than implied by whether a value is present.
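An illustrative sketch of the change (names are assumptions):

    import "github.com/rclone/rclone/fs"

    // transferOptions carries explicit intent instead of implying it
    // from the metadata value being present.
    type transferOptions struct {
        metadata     fs.Metadata // may be set even when it must not be written
        copyMetadata bool        // explicit: only write metadata when true
    }

    // shouldWriteMetadata also checks the destination's capability, so
    // a backend like sftp which can't store metadata is never targeted.
    func shouldWriteMetadata(opt transferOptions, dst fs.Fs) bool {
        return opt.copyMetadata && dst.Features().WriteMetadata
    }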
In this commit
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
we broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.
This was fixed by setting the "mtime" in the metadata if it was
missing.
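A minimal sketch of the fix (the helper name is an assumption; rclone stores "mtime" metadata as RFC 3339):

    import (
        "context"
        "time"

        "github.com/rclone/rclone/fs"
    )

    // fillMtime synthesizes the "mtime" key from the source object's
    // modification time when the source backend supplied no metadata.
    func fillMtime(ctx context.Context, meta fs.Metadata, src fs.ObjectInfo) {
        if _, found := meta["mtime"]; !found {
            meta["mtime"] = src.ModTime(ctx).Format(time.RFC3339Nano)
        }
    }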