
how to set up XF for load balancing?

My site gets a moderate amount of traffic, but occasionally an event or some news brings a ton of traffic. I started using Cloudflare, and that helps, but how easy is it to split the load in XenForo across multiple servers, and to have an extra server spin up when the load gets heavy?

I'm on vB4 now but will be migrating to XF soon. These kinds of events are expected to happen again, and I also expect a jump in members and activity in the coming months as a new product rolls out. So I'd like to get the server configuration figured out with scalability in mind before the migration.



Well-known member
A. Set up a load-balancing nginx server
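A minimal sketch of what that balancer could look like, assuming two app nodes at placeholder addresses (adjust hosts, ports, and server_name to your setup):

```nginx
# nginx reverse proxy that spreads requests over the PHP app servers.
upstream xf_backend {
    least_conn;             # send each new request to the least busy node
    server 10.0.0.11:80;    # app node 1 (placeholder address)
    server 10.0.0.12:80;    # app node 2
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://xf_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because sessions live in the database or memcached (step B below), no sticky-session logic is needed here; adding capacity for a traffic spike is just another `server` line plus a reload.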

B. For multiple php application servers:

1. You would need a shared-storage solution for both of these folders:
- /data folder
- /internal_data folder

2. Get the sessions from an external source:
- from the database (XenForo does this out of the box)
- or from an external memcached server

That's all. There is nothing more to do.
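Step B.2 with an external memcached server would look roughly like this in XenForo 1.x's library/config.php; the host and port are placeholders, but the option names are the standard XF 1.x cache settings:

```php
<?php
// Share the cache AND the sessions via a central memcached server,
// so any app node behind the balancer can serve any request.
$config['cache']['enabled'] = true;
$config['cache']['cacheSessions'] = true;   // move sessions off the local box
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array('cache_id_prefix' => 'xf_');
$config['cache']['backend'] = 'Memcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers' => array(array(
        'host' => '10.0.0.20',   // placeholder: your memcached server
        'port' => 11211,
    )),
);
```

If you skip this, sessions simply stay in the database, which also works across multiple app servers out of the box.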

C. Obviously, you also need to put the database on an external server. You can additionally set up multiple read replicas of the master database.


Active member
Redis doesn't really scale across multiple boxes. It's single-threaded, single-box. Its primary advantage is that it's around 5 times faster than memcached, per thread. If you only have a single box, it works great.

You can put Redis on a single box and have all PHP processes call it easily enough, though.

For /data and /internal_data... distributed file system? I don't really know what would go best here.

Alteran Ancient

Well-known member
Databases are fun. You could have an external DB server, but that alone will not give you high availability, and it will also increase your load times because of the connection overhead. Thankfully, there are ways to solve this! If you use something like Percona XtraDB, you can have a local read server on every one of your nodes and a write master on just one of them. You can then have keepalived running and waiting for your write master to become unavailable. If/when it does become unavailable, the failover kicks in and re-assigns one of your read slaves to be the write master until the original comes back online.
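The keepalived part can be sketched like this: the app servers always connect to a virtual IP, and keepalived moves that IP to a healthy node when the current master stops answering. Interface name, router ID, and addresses below are placeholders:

```
# keepalived.conf sketch for failing over the write master's virtual IP.

vrrp_script chk_mysql {
    script "/usr/bin/mysqladmin ping"   # node is unhealthy if MySQL stops answering
    interval 2
    fall 3
}

vrrp_instance VI_DB {
    state MASTER            # use BACKUP on the standby nodes
    interface eth0
    virtual_router_id 51
    priority 100            # the highest-priority healthy node holds the VIP
    virtual_ipaddress {
        10.0.0.50           # the app servers always write to this VIP
    }
    track_script {
        chk_mysql
    }
}
```

XenForo itself never needs to know about the failover; its config just points at 10.0.0.50.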

For /data and /internal_data... distributed file system? I don't really know what would go best here.
NFS will take care of that for you, at the expense of high availability. I'm generally not the biggest fan of NFS, because if your application depends on it (XenForo would, if you used it for the data directories), then when the NFS node goes offline, the mount becomes unavailable and your service goes down with it. There are ways to counter this, but they are a lot of hassle to set up.
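For reference, the NFS variant is only a few lines; these paths and the subnet are placeholders for a dedicated storage node exporting the two XenForo directories:

```
# /etc/exports on the storage node
/var/www/xf/data          10.0.0.0/24(rw,sync,no_subtree_check)
/var/www/xf/internal_data 10.0.0.0/24(rw,sync,no_subtree_check)
```

Each web node then mounts them, e.g. `mount -t nfs storage:/var/www/xf/data /var/www/xf/data`. The single storage node is exactly the single point of failure described above.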

Option 2 is to use rsync. It is fast and pretty simple to use, and it will keep your folders in sync across all servers. It becomes more complicated the more nodes you have, because every change has to be replicated to all the other destinations, and you'll need an "on-change" script or cron job to actually run rsync so your changes get synced. The downside is that it is not as real-time as NFS, but it is easier to use if you also want high availability.

As another option, you could use S3 to store attachments and other user data. There are add-ons for XenForo that will enable you to do this.