- caching the phrases in the new egw_cache on tree level
--> a good speed improvement on my development system
- also added a global function
check_load_extension($extension,$throw=false)
The new egw_cache class allows caching on 4 levels:
a) tree: for all instances/domains running on a certain source path
b) instance: for all sessions on a given instance
c) session: for all requests of a session, same as egw_session::appsession()
d) request: just for this request (same as using a static variable)
There's a get, a set and an unset method for each level, e.g. getTree()
or setInstance(), as well as a variant allowing to specify the level as the
first parameter, e.g. unsetCache()
getXXX($app,$location,$callback=null,array $callback_params=array(),$expiration=0)
has three optional parameters (the 3rd to 5th) allowing to specify:
3. a callback if the requested data is not yet stored. In that case the
callback is called and its return value is stored in the cache AND returned
4. parameters to pass to the callback as an array, see call_user_func_array
5. an expiration time in seconds specifying how long data should be cached;
the default 0 means infinite (this time is not guaranteed and not
supported for all levels!)
Data is stored under an application name and a location, like
egw_session::appsession().
In fact data stored at cache level egw_cache::SESSION is stored in
the same way as egw_session::appsession(), so both methods can be used
interchangeably.
The $app parameter should be either the app name or the class name,
both of which are unique.
The tree- and instance-wide cache uses a provider class to store the
data, e.g. in memcached, or, if nothing else is configured, in the
filesystem (eGW's temp_dir).
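A minimal usage sketch based on the description above; the callback, the 'myapp'
data and the egw_cache::INSTANCE constant are illustrative assumptions, only the
method signatures follow the text:

  $lang = 'en';   // just for the example

  // read from the tree-level cache; if nothing is stored yet, the callback is
  // called, its return value is stored in the cache AND returned (kept 1 hour)
  $phrases = egw_cache::getTree('phpgwapi', 'lang-'.$lang,
      array('translation', 'load_lang'),   // hypothetical callback loading the phrases
      array($lang),                        // parameters, passed via call_user_func_array
      3600);                               // expiration in seconds, 0 = infinite

  // store and read a value for all sessions of this instance
  egw_cache::setInstance('myapp', 'config', array('foo' => 'bar'));
  $config = egw_cache::getInstance('myapp', 'config');

  // remove it again, either with the level-specific method ...
  egw_cache::unsetInstance('myapp', 'config');
  // ... or with the generic variant taking the level as first parameter
  egw_cache::unsetCache(egw_cache::INSTANCE, 'myapp', 'config');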
using octal numbers with MySQL leads to unexpected results, because MySQL
parses a leading-zero literal like 0400 as the decimal number 400:
select 384 & 0400 --> 384, not 256=0400
--> converted 0400, 040 and 04 to 256, 32 and 4 for MySQL
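For illustration (PHP side only, the SQL line is shown as a comment):

  // in PHP a leading zero means octal, so 0400 === 256
  echo 384 & 0400;     // prints 256 - the intended result
  // MySQL has no octal literals and parses 0400 as decimal 400:
  //   SELECT 384 & 0400;   --> 384, because 384 & 400 === 384
  // hence the mode bits are written as decimal 256, 32 and 4 in the queries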
- egw_vfs::stat and egw_vfs_stream_wrapper::url_stat now both have a
parameter $try_create_home=false, so a non-existing home directory is
no longer created by default.
- filemanager_ui calls egw_vfs::stat($path,true) to create a possibly
missing home dir (in case it does not exist because of previous
problems)
--> fixes broken home-dir creation and rename, which failed because
url_stat had already tried to create the home dir
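A short sketch of the changed calls, with the signature as described above
($path is just an example):

  $path = '/home/someuser';

  // default: a missing home directory is NOT created any more
  $stat = egw_vfs::stat($path);

  // filemanager_ui explicitly asks for a missing home dir to be created
  $stat = egw_vfs::stat($path, true);   // $try_create_home = true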
and GroupDAV. The "sessionid" gets constructed from the basic auth
credentials and is not random (as the clients don't store them).
--> speeds up the use of *DAV
--> stops the *DAV handlers from creating numerous sessions
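A hedged sketch of the idea; how the sessionid is actually composed in the code
may differ, this only shows a deterministic id derived from the Basic Auth
credentials:

  $domain = 'default';   // eGW domain, just for the example
  // hypothetical: hash the credentials into a reproducible "sessionid", so the
  // same client gets the same session on every request instead of a new one
  $sessionid = md5($_SERVER['PHP_AUTH_USER'].':'.$_SERVER['PHP_AUTH_PW'].':'.$domain);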
- fgetcsv only works correctly if setlocale is called with an existing and
correct locale (see the sketch after this list)
- improved the projectmanager method guess_local and moved it to
common::setlocal, which now takes the charset, lang and country of the
user into account
- csv-import now also displays the conversion done and reads usernames
in brackets
- added some missing fields
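A minimal sketch of the fgetcsv()/setlocale() dependency mentioned in the first
item above; the locale name, file and separator are only examples:

  // fgetcsv() splits non-ASCII input correctly only if a matching locale is set
  setlocale(LC_CTYPE, 'de_DE.utf8');

  $fp = fopen('/tmp/import.csv', 'r');
  while (($row = fgetcsv($fp, 0, ';')) !== false)
  {
      // ... import $row, resolving usernames given in brackets
  }
  fclose($fp);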
- all: false (default) = ignore files starting with a dot '.',
true = show all files (. and .. are always ignored!)
- exec: false (default) = do NOT allow uploading or modifying scripts,
true = allow it (if the docroot is mounted, this allows running scripts!)
--> a deny_script method was added to egw_vfs, and calls to it were
added in filemanager
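A hedged sketch of how the exec option could be enforced; the exact signature
and return value of deny_script are assumptions:

  // $path = target path of the upload or edit
  // with exec=false (the default) scripts may not be written to the mount:
  if (egw_vfs::deny_script($path))
  {
      // filemanager rejects the upload / edit instead of storing e.g. a .php file
  }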
Other fixes:
- missing write rights of the webserver were not removed from perms
(causing warnings to be displayed in the ui)
- rename was not working due to typos
--> should be backported for obvious reasons to 1.6
- symlinks are now read by url_stat and dir_opendir and stored in the stat cache, to minimize DB accesses
- negative url_stat calls (not found) are now stored in the stat cache too, to minimize DB accesses
- added a new log level 3, which adds the method name and line number to the SQL statements
- sqlfs can now store symlinks (implements symlink and readlink)
- vfs resolves symlinks before calling a mounted stream-wrapper
--> symlinks can be between different mount-points
- filemanager can create symlinks and follows them
- etemplate vfs_widget displays symlinks (to be improved)
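A short sketch using the new calls; the method names come from the notes above
and the parameter order is assumed to follow PHP's symlink($target,$link):

  // create a symlink inside the VFS and read its target back
  egw_vfs::symlink('/home/user/docs/report.txt', '/home/user/report-link.txt');
  $target = egw_vfs::readlink('/home/user/report-link.txt');
  // $target === '/home/user/docs/report.txt'; the link may even point to a
  // path on a different mount-point, as the VFS resolves it before dispatching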
Happy testing :-)
--> now you can find the error in the error_log and don't get only a blank page
(also optimized it so that we first try to autoload the class and use the diverse "magic" only if that fails)
per session and then operate on that cache.
This eliminates ~25 database queries for each sitemgr page.
Test carefully before applying this to a production system!
- changed the processing of SlowSync to use the content_map instead of
trying to build a new one. Building a new one caused duplication issues
on the client if multiple similar records were stored, because only the
first one found in the server DB was matched. These duplicate entries on
the client side had no entry on the server side, so deleting the wrong one
on the client (the content with a valid map entry) could cause
unwanted data loss on the server side, because it is impossible for the
user to see what is a duplicate and what is not.
see also:
http://www.nabble.com/again---syncml-duplication-issue-to20333619s3741.html
- re-enabled the UID from SyncML clients; because it was only partly used,
this caused issues during SlowSync if the content was changed.
- infolog, calendar: if a UID is found in the provided data, always try to
find the corresponding content first using only the UID, instead of
using the content-id taken from the content_map (see the sketch at the
end of this list).
also fixed:
- a few fixes in ./notes
- creating an entry on the client that cannot be imported
(for example, a Nokia E-Series appointment without a title)
will no longer create an invalid content-map entry.
However, on the client side this is still counted in the protocol as a
Server-Add.
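A heavily simplified sketch of the lookup order described for infolog and
calendar above; all names here are illustrative, not the actual implementation:

  // prefer the UID contained in the synced data over the content_map entry
  $id = null;
  if (!empty($uid))
  {
      $id = $bo->find_entry_by_uid($uid);       // hypothetical lookup by UID only
  }
  if (!$id && isset($content_map[$locid]))
  {
      $id = $content_map[$locid];               // fall back to the content_map
  }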
manufacturer and the recognized GroupDAV client as product name.
This way we are able to handle different GroupDAV clients, as we
already do with different SyncML clients.
Also removed the no longer needed code enabling the use of the real UID,
as SyncML no longer misuses the UID for its GUID.
longer use GUIDs containing eGW's install_id, as that information is
irrelevant for SyncML and causes duplication of entries if the
install_id changes.
I plan to have a new rc4 Wednesday or Thursday containing these changes.
- adding the application ('syncml')
- replacing next_record()/f() with fetch()/fetchSingle() or looping over the result object
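An illustrative before/after sketch of that replacement; the method names are
taken from the line above, while the query and table are only placeholders:

  // old style
  $db->query("SELECT config_name,config_value FROM egw_config");
  while ($db->next_record())
  {
      $config[$db->f('config_name')] = $db->f('config_value');
  }

  // new style: loop over the returned result object ...
  foreach ($db->query("SELECT config_name,config_value FROM egw_config") as $row)
  {
      $config[$row['config_name']] = $row['config_value'];
  }
  // ... or fetch a single row / single value directly
  $row   = $db->query("SELECT config_value FROM egw_config WHERE config_name='site_title'")->fetch();
  $count = $db->query("SELECT COUNT(*) FROM egw_config")->fetchSingle();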
Thanks to Philip Herbert from Knauber for testing it