Perforce Chronicle ships with a Search module powered by Zend Search Lucene. Lucene stores its search index under "chronicle root/data/sites/site name/search-index/".
If Lucene is used when horizontal scaling is employed, the search index becomes fragmented across your web servers, causing search to function incorrectly.
We recommend disabling the Search module as detailed in Section 19.3, “Enabling and Disabling Modules”. Alternatively, the Lucene search index could be placed on shared network storage (e.g. using NFS).
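If you choose shared network storage instead, the search index directory can be exported from an NFS server and mounted on each web server. The following is a minimal sketch of an /etc/fstab entry; the server name, export path, site name, and Chronicle root are assumptions for illustration:

```
# /etc/fstab on each web server (nfs-server, export path, and Chronicle root are examples)
nfs-server:/export/chronicle-search  /var/www/chronicle/data/sites/example.com/search-index  nfs  defaults  0  0
```

Mounting the same export on every web server ensures all Lucene reads and writes hit a single shared copy of the index.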
Chronicle session data is stored using Zend Session. By default, session data is stored on the filesystem of the web server that responded to the request.
This configuration can still function when horizontal scaling is employed provided your load balancer is configured to always send users to the same web server once they establish a session. In Amazon's Elastic Load Balancer this is referred to as sticky user sessions. Using sticky sessions can lead to a less even load distribution across your web servers. Additionally, if a web-server goes down or is taken out of the cluster, users accessing that server become logged out.
Instead of relying on sticky sessions, Chronicle can be configured to store session data in memcached, allowing requests to be serviced by any web server in the cluster. Sessions may be stored in the same memcached pool used for the default/page cache if sufficient space is available. Should your memcached pool run out of memory, less active records are purged, which could result in users being logged out. If you anticipate that your existing memcached pool may become full during normal usage, we recommend using a dedicated pool for session storage.
To enable memcached based session storage, add the following to your application.ini:
[production]
resources.session.savehandler.class = P4Cms_Session_SaveHandler_Cache
resources.session.savehandler.options.backend.name = P4Cms_Cache_Backend_MemcachedTagged
resources.session.savehandler.options.backend.customBackendNaming = 1
resources.session.savehandler.options.backend.options.servers.host = <memcached address>
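If your session pool spans more than one memcached server, the single servers.host line can be replaced with indexed entries. This is a sketch based on the standard Zend_Cache memcached backend options; the hosts and ports below are placeholders:

```ini
; two memcached servers, listed by index (addresses and ports are examples)
resources.session.savehandler.options.backend.options.servers.0.host = 10.0.0.10
resources.session.savehandler.options.backend.options.servers.0.port = 11211
resources.session.savehandler.options.backend.options.servers.1.host = 10.0.0.11
resources.session.savehandler.options.backend.options.servers.1.port = 11211
```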
Chronicle utilizes a default cache to accelerate common operations (e.g. parsing all module.ini and theme.ini contents).
Additionally, a page cache is utilized to improve performance for common requests.
By default, these caches are stored using the file-based Zend Cache backend. When employing horizontal scaling, this configuration would result in numerous copies of each cache entry being stored. More importantly, the cache would not be properly cleared across web servers, resulting in unstable operation or stale data being shown to end users.
To correct this we recommend using memcached as a shared cache backend. To enable the memcached cache backend, add the following to your application.ini:
[production]
resources.cachemanager.default.backend.name = P4Cms_Cache_Backend_MemcachedTagged
resources.cachemanager.default.backend.customBackendNaming = 1
resources.cachemanager.default.backend.options.servers.host = <memcached address>
resources.cachemanager.page.backend.name = P4Cms_Cache_Backend_MemcachedTagged
resources.cachemanager.page.backend.customBackendNaming = 1
resources.cachemanager.page.backend.options.servers.host = <memcached address>
To minimize web requests, Chronicle automatically aggregates the CSS and JavaScript assets used on your site.
By default, these aggregated assets are stored on the filesystem of the web server that responded to the request. When employing horizontal scaling, the aggregated assets need to be stored in a shared location. We recommend using Amazon's S3 asset handler for this purpose. Alternatively, the folder "chronicle root/data/resources/" can be placed on network storage (e.g. using NFS) to give all of the web servers access.
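If you take the NFS route rather than S3, each web server mounts the shared resources folder in place. A sketch of the /etc/fstab entry, assuming a hypothetical NFS export and a Chronicle root of /var/www/chronicle:

```
# /etc/fstab on each web server (nfs-server, export path, and Chronicle root are examples)
nfs-server:/export/chronicle-resources  /var/www/chronicle/data/resources  nfs  defaults  0  0
```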
To enable the S3 asset handler, add the following to your application.ini:
[production]
resources.assethandler.class = P4Cms_AssetHandler_S3
resources.assethandler.options.bucket = <s3 bucket name>
resources.assethandler.options.accessKey = <key>
resources.assethandler.options.secretKey = <secret>
The S3 asset handler does not create the bucket; you must create the bucket manually before configuring the asset handler.
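The bucket can be created with any S3 client. For example, with the AWS CLI (the bucket name is a placeholder, and this assumes your AWS credentials are already configured):

```shell
# create the bucket before enabling the asset handler (bucket name is an example)
aws s3 mb s3://my-chronicle-assets
```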