New XenForo setup questions (clustering, caching etc.)

Ketola

Member
I'm setting up XenForo on our small cluster of servers, and I'd appreciate any tips & pointers other XenForo users might have regarding setup and deployment.

Our cluster consists of 3 database servers (1 master, 2 slaves) running MySQL 5.5.15 (dual E5620, 32GB RAM, 12*15k RPM SAS drives in RAID6), 4+1 web backends (single X3440, 8GB RAM, 2*7k RPM SATA drives in RAID1), and a couple of other servers for serving static content and proxying the web backends with Varnish.

We're currently only running Coldfusion (or rather OpenBD) on top of Tomcat, so there's no existing PHP environment on the servers.

I understand that XenForo, at least for the time being, is unable to utilize the slave databases for read queries. We can forget about those then.

My questions at this point are:
  1. What would be the best way for setting up XenForo in a cluster of web servers, any of which can go offline at any given time for maintenance etc.? I'm currently considering using XCache for opcode caching and a distributed & redundant memcached for session storage.

    How about attachments? Is the attachment storage folder a configurable option, or should I just NFS mount the data and internal_data folders on all the backends?

  2. Are there potential pitfalls in the above setup? E.g., if sessions are stored in memcached, does XenForo care if requests are bounced around between different backends? I can't see why it would, but it would be nice knowing beforehand if it does. =)

    To begin with it wouldn't be a major headache having XenForo on just one of the backends without any clustering. I don't see any chance of running into performance issues with our initial launch. Later, after importing roughly 5 million posts from our existing forums, this could be an issue - especially once the search engine bots crawl in.

  3. Is there a way of preventing XenForo from creating sessions for guest users / robots? We employ Varnish quite extensively, and would like to do so with XenForo as well. The fact that the xf_session cookie is created and maintained for all visitors makes this rather unfeasible.

    Of course I could just drop the Set-Cookie header in Varnish unless the request URL is /login/login, but I suspect that would cause XenForo to create orphan sessions on every single page load that hits the backend.

  4. How about detaching JS and CSS completely from XenForo? What I mean by this is the ability to serve CSS and JS from a separate server with no XenForo installed. Can this be done? I know JS, data and style images can be served from a separate URL, but I'm looking for a way of "pre-rendering" the CSS and serving it from a static server.

    Even though it's not such a big issue nowadays, serving CSS files as "css.php?css=xenforo,form,public&style=1&dir=LTR&d=1365079688" can cause some proxies not to cache the content. More importantly, loading the CSS through css.php prevents serving it from a cookieless, CDN-hosted domain.
    Yes, the css.php sets expiry to 2020, but it's all about the first page load, empty cache experience.
Thank you for reading all the way down here. =) I've probably just scratched the surface with these questions, but they're the most important ones in deciding which way to proceed. I have several more questions regarding other (frontend) optimizations, but this is probably not the place for those.

All input, comments, feedback etc. is much appreciated!
 
Heh, that's quite some setup.

1) The NFS option
2) I don't see any issues occurring. I've not heard of any clustered or failover setups complaining.
3) Mike would probably be your best bet with this question
4) JS can already be set to external sources; CSS, however, I think not.
 
I just finished doing a similar setup, so let me share my experience.

Ketola said:
What would be the best way for setting up XenForo in a cluster of web servers, any of which can go offline at any given time for maintenance etc.? I'm currently considering using XCache for opcode caching and a distributed & redundant memcached for session storage.

You have two places in which dynamic data is uploaded, one is the database, and the other one is the filesystem for attachments.

We are using XCache for the opcode caching, over php-fpm. I prefer memcache for the object storage.

You can use memcache to store the session information, the datastore, and other smallish things (like the minified CSS). You can either run a separate memcache cluster, or run memcached on the web boxes themselves (which will prevent you from taking a box completely down). Sessions can be stored in the database instead, but memcached seems marginally faster for us.
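As a sketch of what that looks like in XenForo 1.x's config.php (the host/port values here are placeholders; the backend name maps onto a Zend_Cache backend):

```php
<?php
// config.php sketch: memcached-backed cache + session storage for XenForo 1.x.
// Server addresses below are placeholders for your own memcached pool.
$config['cache']['enabled'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions']['cache_id_prefix'] = 'xf_';
$config['cache']['cacheSessions'] = true;   // keep sessions in the pool too
$config['cache']['backend'] = 'Memcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers' => array(
        array('host' => '10.0.0.21', 'port' => 11211),
        array('host' => '10.0.0.22', 'port' => 11211),
    ),
);
```

With cacheSessions enabled, losing the whole pool logs everyone out, so weigh that against the database-backed default.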

Ketola said:
How about attachments? Is the attachment storage folder a configurable option, or should I just NFS mount the data and internal_data folders on all the backends?

In config.php you can set $config['internalDataPath'] and $config['externalDataPath'] to point to any folder you want; these can point either 1) to your SAN mount or 2) to an NFS mount.

I am trying a more advanced setup in which, instead of having the filesystem mounted on 3 servers, I am load balancing attachment.php and some other files to just one particular server so I can concentrate all the uploads in that particular one.
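A minimal nginx sketch of that routing idea, assuming PHP-FPM over FastCGI; the upstream names, IPs and URL patterns here are my assumptions, not the actual setup:

```nginx
# Sketch: pin attachment traffic to a single upload node while all
# other PHP requests are balanced across the pool (names/IPs assumed).
upstream php_pool   { server 10.0.0.11:9000; server 10.0.0.12:9000; }
upstream upload_box { server 10.0.0.13:9000; }

server {
    listen 80;

    # Attachment views and uploads always hit the one dedicated backend
    location ~ ^/(attachments|index\.php/attachments) {
        include fastcgi_params;
        fastcgi_pass upload_box;
    }

    # Everything else is load balanced
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php_pool;
    }
}
```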

If you do an NFS mount, you will need to patch some files (details here: http://xenforo.com/community/threads/mini-bug-attachments-do-not-work-on-nfs.47296/) since the default installation has a bug and can't write to an NFS mount. That will be resolved in a future version.


Ketola said:
Is there a way of preventing XenForo from creating sessions for guest users / robots? We employ Varnish quite extensively, and would like to do so with XenForo as well. The fact that the xf_session cookie is created and maintained for all visitors makes this rather unfeasible.

There is no way, and it is mildly annoying. I have been meaning to file a suggestion for this. It complicates putting a reverse caching proxy such as Varnish in front of it, but it is not impossible.

You could configure varnish (or something else) to pass-through any request that contains an xf_user cookie, and keep a shared cache for all requests that do not contain an xf_user cookie.

The xf_user cookie only gets initialized at login, so of course you would need to pass through requests that contain /login.

This is not a perfect solution in the sense that the xf_session would most likely be created (wasted memory), but at least subsequent hits would be served from the cache.
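A rough Varnish 3 VCL sketch of that pass-through scheme; the cookie names come from this thread, but the URL patterns and TTL are assumptions:

```vcl
# Sketch: cache guest traffic, pass logged-in users through.
sub vcl_recv {
    # Logged-in users carry xf_user: always go to the backend
    if (req.http.Cookie ~ "xf_user=") {
        return (pass);
    }
    # Login/registration must reach the backend so cookies can be set
    if (req.url ~ "^/(login|register)") {
        return (pass);
    }
    # Guests: drop cookies so all anonymous requests share one cache object
    unset req.http.Cookie;
    return (lookup);
}

sub vcl_fetch {
    # Strip the xf_session Set-Cookie on pages we intend to cache
    if (req.http.Cookie !~ "xf_user=") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 5m;
    }
}
```

As noted above, the backend still creates orphan sessions on cache misses; this only stops the cookie from splitting the cache.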

Ketola said:
How about detaching JS and CSS completely from XenForo? What I mean by this is the ability to serve CSS and JS from a separate server with no XenForo installed. Can this be done? I know JS, data and style images can be served from a separate URL, but I'm looking for a way of "pre-rendering" the CSS and serving it from a static server.

JS, completely possible. I am serving all the images from a CDN myself. You need to change your style so that instead of images/ it reads from http://yourcdn/images, and I believe in config.php you can set $config['externalDataUrl'] and $config['javaScriptUrl'].

The css.php has the problem of being tightly coupled to the templates. If you feel adventurous you can explore doing a mod_rewrite and changing the templates to make sure that the content is cached (it has a parameter key anyway, so you can potentially expire it anytime). This is a limitation today.
 
Thank you very much for your response Salvik and Mr Centaurian!

I just finished doing a similar setup, so let me share my experience.

We are using XCache for the opcode caching, over php-fpm. I prefer memcache for the object storage.

You can use memcache to store the session information, the datastore, and other smallish things (like the minified CSS). You can either run a separate memcache cluster, or run memcached on the web boxes themselves (which will prevent you from taking a box completely down). Sessions can be stored in the database instead, but memcached seems marginally faster for us.

Sounds like you've chosen the path I'm planning to go, which makes me more confident about my choice. I'll have memcached running on web backends as well as a separate server. As I mentioned, I'm planning on using a redundant memcached pool, but I'm not sure if that can be accomplished with Zend_Cache_Backend. Got to give it a shot once I get the chance.

I'm referring to this: http://serverfault.com/questions/16...ns-be-used-to-share-sessions-more-efficiently

If it doesn't work, we can go with storing sessions in database.


In config.php you can set $config['internalDataPath'] and $config['externalDataPath'] to point to any folder you want; these can point either 1) to your SAN mount or 2) to an NFS mount.

If you do an NFS mount, you will need to patch some files (details here: http://xenforo.com/community/threads/mini-bug-attachments-do-not-work-on-nfs.47296/) since the default installation has a bug and can't write to an NFS mount. That will be resolved in a future version.

Oh yes, that's right. No need to NFS mount the data folder in its actual location in the xenforo folder. Thanks! And also thanks for pointing out your patch. Saves me a headache. =)

I am trying a more advanced setup in which, instead of having the filesystem mounted on 3 servers, I am load balancing attachment.php and some other files to just one particular server so I can concentrate all the uploads in that particular one.

That could also do the trick. I have considered installing XenForo on our static content server as well just to serve attachments and CSS.

There is no way, and it is mildly annoying. I have been meaning to file a suggestion for this. It complicates putting a reverse caching proxy such as Varnish in front of it, but it is not impossible.

You could configure varnish (or something else) to pass-through any request that contains an xf_user cookie, and keep a shared cache for all requests that do not contain an xf_user cookie.

The xf_user cookie only gets initialized at login, so of course you would need to pass through requests that contain /login.

This is not a perfect solution in the sense that the xf_session would most likely be created (wasted memory), but at least subsequent hits would be served from the cache.

I'm sure there are valid points for creating a session for each visitor, but it sure makes caching more difficult.

I was going to use xf_user to detect valid logins, but if the user does not check the "Stay logged in" checkbox, no xf_user cookie is created; the xf_session cookie remains the only xf cookie defined.

JS, completely possible. I am serving all the images from a CDN myself. You need to change your style so that instead of images/ it reads from http://yourcdn/images, and I believe in config.php you can set $config['externalDataUrl'] and $config['javaScriptUrl'].

These I have managed to find. Thanks.

The css.php has the problem of being tightly coupled to the templates. If you feel adventurous you can explore doing a mod_rewrite and changing the templates to make sure that the content is cached (it has a parameter key anyway, so you can potentially expire it anytime). This is a limitation today.

Thought of this, but the problem with doing a redirect with mod_rewrite is that the initial request will come to our server instead of going straight to the CDN.

It'd be great having an optional prefix for the css.php template - just like JS files have {$javaScriptSource} prefixed to them. It would then be just a matter of having the CDN pull the css.php?foo=bar from the XenForo origin.
 
As I mentioned, I'm planning on using a redundant memcached pool, but I'm not sure if that can be accomplished with Zend_Cache_Backend. Got to give it a shot once I get the chance.

I'm referring to this: http://serverfault.com/questions/16...ns-be-used-to-share-sessions-more-efficiently

If it doesn't work, we can go with storing sessions in database.


Replying to myself. Doesn't look like this can be accomplished with Zend_Cache_Backend. From what I've gathered the memcached pooling and redundancy only works when using PHP's own session storage and defining session.save_handler=memcache.
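For anyone following along, the php.ini side of the session.save_handler=memcache approach looks roughly like this (server addresses are placeholders; the pecl memcache extension reportedly wants session_redundancy set to server count + 1 because of an off-by-one in the extension):

```ini
; php.ini sketch: redundant memcached session storage via pecl memcache.
; Both addresses below are placeholders for your own pool.
session.save_handler = memcache
session.save_path = "tcp://10.0.0.1:11211, tcp://10.0.0.2:11211"

; write each session to both servers, and fail over if a node drops out
memcache.session_redundancy = 3
memcache.allow_failover = 1
```

Note this only applies when PHP's native session handling is in use; XenForo's own Zend_Cache-based session storage bypasses these settings, which matches the finding above.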
 
I'm just leaving a cloud host and found out that they went in and altered my styles. Now that I'm on a dedicated server, I'm discovering why I was having a nightmare trying to figure out where images were always going. They did this to the attachments as well. I've reset the header and a few other places where they pointed static data directories, but I can't find where they changed the static setting for attachments. I'm still confused, so most likely not wording this well.

Can you help?
 
Are you sure they didn't just add something to config.php?

Code:
$config['internalDataPath'] = 'new_internal_data_path';
$config['externalDataPath'] = 'new_external_data_path';
$config['externalDataUrl'] = 'new_external_data_url';

If not, you will need to ask the host.
 
They did that too. No, they took the liberty of tweaking my styles. Not impressed. I like this idea, but I would have liked to have known about it instead of the sneaky way they went about it. I spent days trying to resolve images that were broken by the changes. It created issues with some scripts and banners not loading. I'm still trying to sort it out.

Anyway. I'll get over it.


The lines they added to my config.php are:

Code:
#$config['externalDataUrl'] = 'http://static.mydomain/data';
#$config['javaScriptUrl'] = 'http://static.mydomain/js';

I can't understand where the attachments went. Are they gone, lost?
 
I understand this whole cloud process better now; you've got to live in a cloud to know there are pros and cons indeed.
It created a lot of confusion for me, especially not knowing why some directories weren't resolving the way I intended. They never told me they were changing my paths.

Glad to be back on a dedicated.
I think I'll make my own cloud now that I see what this is about. Good thread!
 
Our cluster consists of 3 database servers (1 master, 2 slaves) running MySQL 5.5.15 (dual E5620, 32GB RAM, 12*15k RPM SAS drives in RAID6), 4+1 web backends (single X3440, 8GB RAM, 2*7k RPM SATA drives in RAID1), and a couple of other servers for serving static content and proxying the web backends with Varnish.

  1. What would be the best way for setting up XenForo in a cluster of web servers, any of which can go offline at any given time for maintenance etc.? I'm currently considering using XCache for opcode caching and a distributed & redundant memcached for session storage.
A while ago I finished this CentOS 6 x86_64 minimal setup for a client:
  • 2 redundant Nginx entry servers
  • 10 PHP-FPM nodes with OPCache
  • 1 MariaDB server
This is the approach I used:
  1. The 2 Nginx servers are redundant, with Memcached installed on each of them. If one fails, the second takes over and sends an email alert to the sysadmin. Attachments are stored only on these servers and synced live using inotify kernel events (NFS would be a disaster on a site like yours). In other words, if a new file is uploaded/modified/deleted on the main active server, it is instantly copied/modified/deleted on the failover one. Even if a file is uploaded to node8 (for example), it is automatically pushed to the main Nginx server and never kept on any node. This avoids wasted disk space and sync issues, and lets Nginx serve those static files at blistering speeds. You also don't have a SPOF on your hands, which is vital for a large site like yours.
  2. PHP-FPM 5.5 and OPCache are installed on the 10 nodes running only PHP, as well as on the 2 redundant Nginx servers. I took advantage of Nginx's capability to cluster all 12 servers and serve PHP as one entity.
  3. MariaDB 5.5 is running on a server with 24 physical cores. Instead of looking at MySQL Cluster or MariaDB Galera (which would let you run multiple database servers with XenForo and not worry about the slave replication mess in MySQL), I decided it is best to have one server (I know, a SPOF) and protect the data with incremental backups every 3 hours.
The above setup allows the site to run smoothly with over 30,000 online users and keeps the load on the PHP nodes at around 0.8-1, while the database server idles at 75% (25% used). Transfers between the main Nginx server and the database server run over dual NICs at 70-80MB/s.
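A minimal sketch of the inotify-based attachment sync described in step 1, using inotifywait (from the inotify-tools package) plus rsync; the paths and failover hostname are assumptions, not the actual setup:

```shell
#!/bin/sh
# Sketch: mirror attachment changes from the active entry server to the
# failover. Watches the tree for writes/deletes/moves and re-syncs.
WATCH=/var/www/xenforo/internal_data
PEER=failover.example.com   # placeholder hostname

inotifywait -m -r -e close_write,delete,moved_to --format '%w%f' "$WATCH" \
| while read -r changed; do
    # Mirror the whole tree; rsync makes repeat runs cheap after the first
    rsync -az --delete "$WATCH/" "$PEER:$WATCH/"
done
```

In production you would want to debounce bursts of events and run this under a supervisor, but the shape of the approach is the same.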
 