allow to do propfinds on huge addressbooks independent of memory_limit:
- the regular groupdav_handler::propfind() method gets split into a method just
computing a filter and a callback to run that filter on the backend
- a groupdav_propfind_iterator object is returned from the propfind method
instead of an array with information about the files
- the iterator calls groupdav_handler::propfind_callback if there are no more
entries from the previous call
- the constructor of groupdav_propfind_iterator allows to pass an extra array
with files to return, to simplify modifying existing implementations
(where eg. information about the current path gets supplied by the
calling groupdav class).
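A minimal sketch of the lazy-fetching idea behind that iterator (constructor
signature and internals are assumptions, not the actual implementation):

    class groupdav_propfind_iterator implements Iterator
    {
        protected $callback;   // eg. array($handler,'propfind_callback')
        protected $filter;     // filter computed by propfind()
        protected $files;      // current chunk, can be pre-seeded via the constructor
        protected $n = 0;

        public function __construct($callback, array $filter, array $files=array())
        {
            $this->callback = $callback;
            $this->filter   = $filter;
            $this->files    = $files;
        }
        public function current() { return current($this->files); }
        public function key()     { return $this->n; }
        public function next()    { ++$this->n; next($this->files); }
        public function rewind()  { $this->n = 0; reset($this->files); }  // rewinds only the current chunk
        public function valid()
        {
            if (current($this->files) !== false) return true;
            // no more entries from the previous call --> fetch the next chunk from the backend
            if (!($more = call_user_func($this->callback, $this->filter))) return false;
            $this->files = $more;
            reset($this->files);
            return true;
        }
    }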
only if you select more than one timezone to display in it)
- moved generation of the beautified timezone array to egw_time
- moved all preferences hooks to a new class preferences_hooks (bumped the
version so setup updates the hook data, or you need to call admin >> update
hooks)
--> recognized it's so old and dusty, it does not make sense any more
--> moved the content_header() method to the html class
- fixed calls of browser->content_header to use html::content_header
instead
calendar, plus a first calendar implementation.
This implementation just replaces the following calendar_bo methods:
- date2ts($date,$user2server=False)
- date2array($date,$server2user=False)
- date2string($date,$server2user=False,$format='Ymd')
- format_date($date,$format='')
with static methods from egw_time.
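A minimal sketch of the kind of server/user conversion these statics do, using
plain PHP DateTime (timezone names are just examples, this is not the egw_time
code):

    $server_tz = new DateTimeZone('Europe/Berlin');     // timezone the server runs in
    $user_tz   = new DateTimeZone('UTC');               // timezone from the user's preferences
    $date = new DateTime('2009-06-15 14:00', $server_tz);
    $date->setTimezone($user_tz);                       // same moment, expressed in user time
    echo $date->format('Ymd H:i');                      // 20090615 12:00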
If your server is in the same timezone as the user, you should experience no
difference. As a small test, you can switch to another timezone (eg.
UTC) and notice on a weekly repeating event (which still repeats at the
same server time!) that it moves by one hour when daylight saving
changes. Switching to a TZ with different daylight saving rules
was not working before.
Happy testing :-)
WebDAV RFC 4918 allows a full url or a path as <D:href>:
http://www.webdav.org/specs/rfc4918.html#ELEMENT_href
Some clients can NOT deal with one or the other:
- KAddressbook (at least in 3.5) can NOT subscribe to addressbooks (it
does not find them) if just a path is used
- iCal in OS X 10.6 generates wrong requests, if a full url is used
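Both forms are allowed by the RFC, eg. (purely illustrative server and path):

    <D:href>http://example.org/egroupware/groupdav.php/username/addressbook/</D:href>
    <D:href>/egroupware/groupdav.php/username/addressbook/</D:href>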
register cookies in $_REQUEST any more by default (there's now a php.ini
variable 'request_order' to control that, but we want to work with a
default configuration):
- session restore was not working, as only $_REQUEST[sessionid] was checked
- multi-domain installs were not working, as the domain cookie was not checked
- encrypted sessions were not working, because the kp3 cookie was not checked
--> there's now a static method egw_session::get_request($name), which
checks $_REQUEST[$name], $_COOKIE[$name] and, for that Safari bug, also
$_COOKIE[ucfirst($name)]
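A minimal sketch of the idea behind that method (the actual implementation may
differ):

    public static function get_request($name)
    {
        if (isset($_REQUEST[$name])) return $_REQUEST[$name];
        if (isset($_COOKIE[$name])) return $_COOKIE[$name];
        // the mentioned Safari bug: also check the ucfirst'ed cookie name
        if (isset($_COOKIE[ucfirst($name)])) return $_COOKIE[ucfirst($name)];
        return null;
    }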
--> overcomes a problem reported by krupka(at)depag.de on the German list: the password gets lost if a contact gets saved, because the account had to be removed first to add the addressbook object classes
- docu update
"
- when updating all langs, do a transaction for each lang and update the cache after committing the transaction (before, the transaction spanned all langs and the cache update happened within the uncommitted transaction)
- fixed problem with autoloading of languages: the check used filectime, the update used filemtime, which can of course differ
- update the tree-level cache, as we can not effectively unset it in a multi-user environment,
as users from other - not yet updated - instances would update it again with an old version!
- add_app(): we have to use array_merge! (+= does not overwrite common translations with different ones in an app)
array_merge renumbers numeric keys, so translations of numbers get messed up, but those make no sense and should be avoided anyway (see the short illustration below).
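A short illustration of the difference (hypothetical translations):

    $lang = array('yes' => 'Ja', '3' => 'drei');
    $app  = array('yes' => 'Jawohl', '3' => 'drei!');
    $a = $lang; $a += $app;           // += keeps 'Ja': app-specific translations are NOT used
    $b = array_merge($lang, $app);    // app translation wins, but the numeric key '3' gets
                                      // renumbered to 0 and 1 --> translations of numbers break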
"
vfs_stream_wrapper::get_path() returning a url without a path
(sqlfs://default <-- no trailing slash), which causes
sqlfs_stream_wrapper::url_stat() to be called for an empty path, which
gives the sql error.
- fs_link column is queried by sqlfs, but only gets created in 1.7.002
- pdo does not throw exceptions on sql errors, it silently ignores them
--> told it now in the constructor to also throw exceptions (see the sketch below)
- egw_vfs::find() was not always honoring the url option
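A minimal sketch of that constructor change ($dsn, $user and $password are
placeholders):

    $pdo = new PDO($dsn, $user, $password);
    // throw exceptions instead of silently ignoring sql errors
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);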
Patch is mostly created by the script egroupware/doc/fix_depricated.php in a separate commit.
I do NOT advise applying this patch to a production system (it's committed to trunk!), as the automatically applied regular expressions have a good chance to break something ...
- cleaned up exceptions in cli code (no need to log, as it goes directly to the user)
- regarding a small rpm redirect header (< 200 bytes) as no header
- fixed wrongly detected vars for cli install (eg. webserver_url)
- fixed egw_cache to not stall if system_charset is not yet in the db
Timesheet entries with a time of 0:00 that were created during winter (standard)
time get assigned to the wrong day by a union query that runs during summer
(daylight saving) time. Presumably this is only true for the PostgreSQL query,
which is why you have not noticed it so far.
The patch:
Changes the PostgreSQL-specific part of the query so that it takes
timezone information into account.
- make update a separate setup-cmd-object
- fixed handling of egw_exception_wrong_userinput, to not include a
trace (which is unneeded for regular input validation)
- can be used via the html class like:
$clean_html = html::purify($html);
- using it now in eTemplate to remove malicious code from html:
a) when displaying "formatted text"
b) when "formatted text" gets input by the user
- if the base is a symlink to a dir, it is listed anyway
- symlinks to dirs (besides the base) are only descended into, if
$options['follow'] is specified
- links are returned for type 'f'
--> removed the follow option from find() in filemanager_ui::get_rows()
--> no more infinite symlink loops in the filemanager GUI
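A hedged usage sketch with the options mentioned above (the exact find()
signature may differ):

    // only files; symlinked dirs are NOT descended into by default
    $files = egw_vfs::find('/home/user', array('type' => 'f'));
    // explicitly follow symlinks to dirs
    $files = egw_vfs::find('/home/user', array('type' => 'f', 'follow' => true));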
limit on the number of cascaded folders in Filemanager
--> MySQL 5.0 has a nesting limit for subqueries
--> working around that by limiting the nesting level to 10
trailing slashes added to all collections caused a rename to an empty filename
--> all trailing slashes now get removed prior to calling any backend functions
Fatal error: Class 'notifications' not found in
/home/domain/public_html/egw/etemplate/inc/class.bo_tracking.inc.php
--> reverts an older commit fixing a problem between the (deprecated and no longer working) browser app and the browser class in the API
- caching the phrases in the new egw_cache on Tree level
--> a good speed improvement on my devel system
- also added a global function
check_load_extension($extension,$throw=false)
It allows to cache on 4 levels:
a) tree: for all instances/domains running on a certain source path
b) instance: for all sessions on a given instance
c) session: for all requests of a session, same as egw_session::appsession()
d) request: just for this request (same as using a static variable)
There's a get, a set and an unset method for each level: eg. getTree()
or setInstance(), as well as a variant allowing to specify the level as the
first parameter: eg. unsetCache()
getXXX($app,$location,$callback=null,array $callback_params,$expiration=0)
has three optional parameters allowing to specify:
3. a callback if the requested data is not yet stored. In that case the
callback is called and its value is stored in the cache AND returned
4. parameters to pass to the callback as an array, see call_user_func_array
5. an expiration time in seconds to specify how long data should be cached,
default 0 means infinite (this time is not guaranteed and not
supported for all levels!)
Data is stored under an application name and a location, like
egw_session::appsession().
In fact, data stored at cache level egw_cache::SESSION is stored in
the same way as egw_session::appsession(), so both methods can be used
with each other.
The $app parameter should be either the app name or the class name, both
of which are unique.
The tree and instance wide cache uses a provider class to store the data,
eg. in memcached or, if nothing else is configured, in the filesystem
(eGW's temp_dir).
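A usage sketch with the method names given above (the app/location values,
the callback and the TREE constant name are assumptions):

    // compute and cache expensive data for one hour on instance level
    $config = egw_cache::getInstance('myapp', 'config', 'myapp_read_config', array(), 3600);
    // store something for all instances running on this source path
    egw_cache::setTree('myapp', 'version', '1.7.002');
    // remove it again, specifying the level as the first parameter
    egw_cache::unsetCache(egw_cache::TREE, 'myapp', 'version');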
using octal numbers with mysql leads to funny results:
select 384 & 0400 --> 384, not 256=0400 (MySQL reads 0400 as decimal 400)
--> converted 0400, 040 and 04 to 256, 32 and 4 for mysql
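In PHP the conversion is trivial, as PHP does understand the octal literal:

    $mode = 0400;        // PHP octal literal, value 256
    echo 384 & $mode;    // 256, as expected
    // but embedding the literal '0400' in SQL makes MySQL read decimal 400:
    // SELECT 384 & 0400  -->  384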
- egw_vfs::stat and egw_vfs_stream_wrapper::url_stat now both have a
parameter $try_create_home=false, so they no longer create a non-existing
home directory by default.
- filemanager_ui calls egw_vfs::stat($path,true) to create a possibly
missing home dir (in case it does not exist because of previous
problems)
--> fixes not working home-dir creation or rename, because url_stat
already tried to create the home dir
and GroupDAV. The "sessionid" gets constructed from the basic auth
credentials and is not random (as the clients don't store them).
--> speeds up the use of *DAV
--> stops *DAV handlers from creating numerous sessions
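A purely illustrative sketch of a non-random "sessionid" derived from the basic
auth credentials (the actual derivation is not shown here):

    // same credentials always give the same id, so an existing session can be reused
    $sessionid = md5($_SERVER['PHP_AUTH_USER'].':'.$_SERVER['PHP_AUTH_PW'].':groupdav');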
- fgetcsv only works correctly, if setlocale is called with an existing and
correct locale (see the sketch after this list)
- improved the projectmanager method guess_local and moved it to
common::setlocal, which now takes the charset, lang and country of the
user into account
- csv-import now also displays the conversion done and reads usernames
in brackets
- added some missing fields
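A minimal sketch of the setlocale requirement (locale name and file are just
examples):

    // without a matching locale fgetcsv can split non-ascii lines incorrectly
    setlocale(LC_CTYPE, 'de_DE.UTF-8');
    $fp = fopen('contacts.csv', 'rb');
    while (($row = fgetcsv($fp, 0, ';')) !== false)
    {
        // $row is one csv record, already split into fields
    }
    fclose($fp);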
- all: false (default) = ignore files starting with a dot '.',
true = show all files (. and .. are always ignored!)
- exec: false (default) = do NOT allow uploading or modifying scripts,
true = allow it (if the docroot is mounted, this allows running scripts!)
--> a deny_script method was added to egw_vfs, and calls to it were added
from filemanager
Other fixes:
- missing write rights of the webserver were not removed from perms
(causing warnings to be displayed in the ui)
- rename was not working due to typos
--> should be backported for obvious reasons to 1.6
- symlinks are now read by url_stat and dir_opendir and stored in the stat cache, to minimize DB accesses
- negative url_stat calls (not found) are now stored in the stat cache too, to minimize DB access
- added new log level 3, which adds method-name and line number to the sql statements
- sqlfs can now store symlinks (implements symlink and readlink)
- vfs resolves symlinks before calling a mounted stream-wrapper
--> symlinks can point between different mount-points
- filemanager can create symlinks and follows them
- the etemplate vfs_widget displays symlinks (to be improved)
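A short usage sketch of the new symlink support (assuming the calls are exposed
via egw_vfs like the other vfs operations):

    egw_vfs::symlink('/home/user/docs/report.txt', '/home/user/report.txt');
    echo egw_vfs::readlink('/home/user/report.txt');   // /home/user/docs/report.txt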
Happy testing :-)
--> now you can find the error in the error_log and don't get only a blank page
(also optimized it so that we first try to autoload the class and use the diverse "magic" only if that fails)
per session and then operate on that cache.
This eliminates ~25 database queries for each sitemgr page.
Test carefully before applying this to a production system!
- changed the processing of slowsync to use the content_map instead of
trying to build a new one. The old approach caused duplication issues on
the client if multiple similar records were stored, because only the
first one found in the server-db was matched. These duplicate entries on
the client side had no entry on the server side, so deleting the wrong
one on the client (the content with a valid map entry) could cause
unwanted data loss on the server side, because it is impossible for the
user to see what is a duplicate and what is not.
see also:
http://www.nabble.com/again---syncml-duplication-issue-to20333619s3741.html
- reenabled the UID from syncml clients: because it was only partly used,
this caused issues during SlowSync if the content was changed.
- infolog, calendar: if a uid is found in the provided data, always try to
find the corresponding content first using only the UID, instead of
using the content-id taken from the content_map.
also fixed:
- a few fixes in ./notes
- creating an entry on the client that can not be imported
(example: a Nokia E Series appointment without a title)
will no longer create an invalid content-map entry.
However, on the client side this is still counted in the protocol as
Server-Add