XenForo on Amazon EC2

Timelord_

Has anyone successfully hosted a large install of XenForo on EC2 across multiple instances? We are currently hosted on a single instance behind an ELB (for free SSL); however, we are now at the point where we need to look at adding additional EC2 instances.

We are using RDS, so we would just need to script the process of installing XenForo and copying any media to the new instances.

Any thoughts?
 
My setup is multiple EC2 instances behind an ELB, connected to S3, Elasticsearch, RDS and CloudFront (with WAF). You have to put XenForo's data and internal_data directories on an external service like AWS S3 (which is what I did) or AWS EFS.

The "install script" is AWS Opsworks. You can auto-launch new instances when the load increases.

The easiest solution for you is to add AWS CloudFront to serve static files from its cache, so those static-file requests rarely touch your server. You get 50 GB of traffic for free in your first year. This helps most if you use Apache now; if you use nginx, the difference isn't as noticeable.
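
A quick way to verify CloudFront is actually caching is to look at the X-Cache response header; the URL below is just a placeholder:

Code:
# Hypothetical URL - check whether CloudFront served the file from its cache.
curl -sI https://forum.example.com/styles/default/xenforo/logo.png | grep -i '^x-cache'
# "X-Cache: Hit from cloudfront" means the request never touched your origin.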
 
We autoscale up to 8 instances and down to 1 overnight; you save a lot of money doing that.
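
For reference, that kind of time-based scaling can be done with two scheduled actions on the Auto Scaling group; the group name and times below are made up:

Code:
# Hypothetical group name and schedule (UTC) - scale out for the day...
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name xf-web-asg \
  --scheduled-action-name scale-out-daytime \
  --recurrence "0 6 * * *" \
  --min-size 2 --max-size 8 --desired-capacity 8

# ...and back down to a single instance overnight.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name xf-web-asg \
  --scheduled-action-name scale-in-overnight \
  --recurrence "0 22 * * *" \
  --min-size 1 --max-size 8 --desired-capacity 1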

Run external data off S3:

Set S3 as a stream wrapper in your config so that avatars and other external data work without any add-ons.

Use the [bd] Attachment Store add-on for attachments.

The only internal_data that we 'share' across servers is the proxied images, for which we use EFS; the proxied images are then served via CloudFront (I wrote an add-on for that, available in the resources). We don't need any other internal_data shared.

RDS runs on provisioned-IOPS EBS volumes, but I'm seriously considering switching to Aurora.

Data is maintained between servers via rsync, which is triggered on server launch or whenever I need something propagated.
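
Not the actual script, but a minimal sketch of that launch-time propagation, assuming a central box and standard webroot paths:

Code:
# Hypothetical host and paths - pull the current application tree onto a freshly launched instance.
rsync -az --delete deploy@central.example.com:/var/www/html/ /var/www/html/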

I run a standalone small instance that allows PHP scripts 512 MB of memory; any less than that and updates will fail. This box also runs Elasticsearch. Consider using a small or medium ElastiCache node for memcached.
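
If you go the ElastiCache route, a quick reachability check from a web instance might look like this (the endpoint is hypothetical):

Code:
# Hypothetical endpoint - confirm the memcached node answers from this instance.
echo stats | nc -w 2 my-cache.abc123.cfg.use1.cache.amazonaws.com 11211 | head -n 5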
 
Hi Jim

Did you test the performance of ElastiCache vs. "no cache" (meaning XF fetches the sessions from RDS)?

I store a new application zip each time I upgrade XF or my style; I wrote a small bash script for that. All servers are then deleted and I bring them up again from the ground up.
 
RDS is not an efficient way of doing caching, as it writes everything to disk. I've been happy with my single ElastiCache node, which hasn't missed a beat in over two years.

The application zip approach would work, I suppose, but it seems slow. For example, when I install a new add-on, I deploy the code to my central server and propagate it out, which only takes a few seconds to reach all servers, then run the add-on XML. The whole process takes a couple of minutes, plus the actual cache rebuilding, which takes ten minutes. Servers are pre-baked AMIs; once or twice a year I'll update the base AMI. The system was designed to scale as quickly as possible, and saving 30 seconds or a minute is important to us.
 
My setup is multiple EC2 instances behind an ELB, connected to S3, Elasticsearch, RDS and CloudFront (with WAF). ...

How does pricing compare to traditional "Dedicated" boxes?
 
I zip the XF install directory and upload it to S3 with my script (takes around half a minute). AWS OpsWorks then deploys the new app (this zip) to the OpsWorks instances. It doesn't work automatically yet; in a perfect world it would zip and upload every time there is a style edit, etc.
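
Not the actual script, but a minimal sketch of that zip-and-upload step, with a made-up bucket name and paths:

Code:
#!/bin/bash
# Hypothetical bucket and paths - zip the XF install directory and push it to S3.
set -euo pipefail
STAMP=$(date +%Y%m%d-%H%M%S)
cd /var/www
zip -qr "/tmp/xenforo-${STAMP}.zip" html
aws s3 cp "/tmp/xenforo-${STAMP}.zip" "s3://my-deploy-bucket/app/xenforo-${STAMP}.zip"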

Guests get a CloudFront-cached view (I wrote an XF add-on to handle caching for CloudFront).
 
... RDS runs on provisioned-IOPS EBS volumes, but I'm seriously considering switching to Aurora. ...


Wondering if you have pulled the trigger on the Aurora move? I am contemplating the same thing, @Jim Boy.
 
... I store a new application zip each time I upgrade XF or my style; I wrote a small bash script for that. ...
@Marcus Any chance you would be able to share the bash script?
 
Wondering if you have pulled the trigger on the Aurora move? I am contemplating the same thing, @Jim Boy.
I haven't yet. I have trialled it and it ran really well. It comes down to cost though, as the pricing model is very different. It's something you want to configure correctly, otherwise you end up spending more money than you should.
 
We autoscale up to 8 instances and down to 1 overnight; you save a lot of money doing that. ...
This is a very cool setup. Have you experimented with Elastic Beanstalk?
 
@Marcus Any chance you would be able to share the bash script?
Obviously quite late to the party, but here's a user-data script I wrote a while back to get a base Amazon Linux AMI up and ready to serve WordPress content. I would probably use nginx today, but you get the idea.

Code:
#cloud-config
# Update the package index and upgrade all packages on first boot.
repo_update: true
repo_upgrade: all

# Apache 2.4 and a PHP 7.0 stack, plus cachefilesd for NFS/EFS client-side caching.
packages:
 - httpd24
 - php70
 - php70-mysqlnd
 - php70-imap
 - php70-opcache
 - php70-gd
 - php70-tidy
 - php70-zip
 - gd
 - gd-devel
 - cachefilesd

runcmd:
 # Enable Apache and cachefilesd at boot.
 - chkconfig httpd on
 - chkconfig cachefilesd on
 # Shared "www" group so ec2-user and apache can both manage the webroot.
 - groupadd www
 - [ sh, -c, "usermod -a -G www ec2-user" ]
 - [ sh, -c, "usermod -a -G www apache" ]
 # Allow .htaccess overrides in the default docroot section of httpd.conf.
 - [ sh, -c, "sed -i '/Options FileInfo AuthConfig Limit/,/Controls who can/ s/AllowOverride None/AllowOverride All/g' /etc/httpd/conf/httpd.conf" ]
 # Pull the webroot tarball from S3 and unpack it into the docroot.
 - [ sh, -c, "aws s3 cp s3://s3-bucket/webroot.tar.gz /tmp/" ]
 - [ sh, -c, "tar xzf /tmp/webroot.tar.gz -C /var/www/html/" ]
 - service cachefilesd start
 - service httpd start
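
For anyone wondering how a script like this gets used: it is passed as user data at launch. The AMI ID and instance type below are placeholders:

Code:
# Placeholders throughout - launch an instance with the cloud-config above as user data.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.small \
  --user-data file://cloud-config.yml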
 
After running it for around six weeks now, I am deeply dissatisfied with the performance/price ratio of AWS.

For my XenForo forums I now run an m4.large EC2 instance (with load balancers in front of it) and an m4.large RDS instance (2 modern vCPUs, 8 GB RAM) running MySQL. RDS storage is 200 GB to have enough headroom.

Currently the RDS CPU sits at around 30-40% with 250-300 concurrent users, and the forum is very fast, IF people use it calmly, the way “normal” users browse a forum. As soon as these users become more active (which happens often given my forum's topic: heavy posting and searching concentrated in a few very hot threads) and/or the user count rises to around 400-500, the forum becomes laggy: CPU usage climbs to 70, 80, 90% (up to 100%), with page load times of up to one minute or blank 503 errors. The EC2 CPU doesn't seem to be the problem, peaking at around 50-60% load.

So, the performance of the RDS instance is MUCH too low. I expect up to 2,000 users or more within a very few weeks (high season). I cannot imagine which instance type I would have to book (and pay for!) to serve that many wild users. I do use Enhanced Search and 20+ add-ons; although they probably add to the RDS load, I still think the RDS instance is simply too small. The change from t2.micro to m4.large (doubling CPU, RAM and network) gave only a tiny performance increase of a very few percent, by no means 25-50% or so.

Can you compare this to your experience, please? What am I missing, what am I doing fundamentally wrong, what can I do? Or do I have to pay $500-1,000/month to get more performance? I do not use Cloudflare yet; I will soon, but I wouldn't expect it to dramatically improve RDS performance...
 
For my XenForo forums I now run an m4.large EC2 instance (with load balancers in front of it) and an m4.large RDS instance (2 modern vCPUs, 8 GB RAM) running MySQL. ...
For the web servers you really should use c4 or c5 instances, IMO. With RDS, I/O is especially critical; you may want to be using provisioned-IOPS-backed storage. Check the I/O stats on your RDS instance, you probably have a bottleneck there (see the sketch at the end of this post).
... As soon as these users become more active and/or the user count rises to around 400-500, the forum becomes laggy: CPU usage climbs to 70, 80, 90% (up to 100%) ... The EC2 CPU doesn't seem to be the problem ...
Get autoscaling going for your EC2 instances. However, it does sound like the DB is the issue, probably I/O related.
So, the performance of the RDS instance is MUCH too low. I expect up to 2,000 users or more within a very few weeks (high season). ...
I think you are on the right instance size now; we are on an m4.xlarge and handle 10,000 concurrent users, so the next size down should be fine for you. Again though: fix your I/O!
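
As a starting point for those I/O stats, something like this pulls peak ReadIOPS around a slow period (the DB identifier is made up; WriteIOPS and the latency metrics are worth the same treatment):

Code:
# Hypothetical DB identifier - peak ReadIOPS around the slow period (UTC times).
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name ReadIOPS \
  --dimensions Name=DBInstanceIdentifier,Value=my-forum-db \
  --start-time 2017-11-05T18:00:00Z --end-time 2017-11-05T21:00:00Z \
  --period 300 --statistics Maximum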
 
Thank you so much, @Jim Boy

Can you spot anything in these dashboards?

The heavy breakdown was on 11-05, at 19-20h UTC. That was with 450+ users on the forum, around 100 of them sitting in one thread, hitting reload every few seconds and writing dozens of posts (they were awaiting the results of a competition ;)).

On 11-08 it became critical and very slow again, though not all the way to a breakdown. See the data in the second screenshot, please.

As it happens, we had changed RDS from t2.micro to m4.large just the day before, on 11-04, which removed the CPU-credit limits. But as you can see, CPU usage was very similar after doubling the hardware! I would have expected at least a 30% improvement.

We increased the storage to 200 GB a few days later, but that did not improve anything noticeable.

We are currently using general-purpose SSD storage. By IOPS-backed storage, do you mean moving to provisioned-IOPS SSD? That alone would double pure RDS costs from around $150 to $300/month, plus the size increase, EC2 and so on...

Any hint is highly appreciated, thank you again!


[Attached: RDS CloudWatch dashboard screenshots from 11-05 and 11-08]
 
It is a little hard to tell from those graphs; over such a long period CloudWatch tends to flatten things and round off the peaks. It is better to drill down to the critical times. I'd be a bit concerned about that IOPS peak on the 5th.

A couple of things though: with 200 GB of general-purpose (gp2) disk space you get a baseline of 600 IOPS (gp2 provides 3 IOPS per GB, so 200 GB × 3 = 600), burstable to 3,000. Are you hitting any limits within that period?

If so, maybe an errant add-on is at fault: enable debug mode and review the SQL statements, looking for anything that takes significantly longer than other queries or pulls large amounts of data needlessly. It also pays to tune your RDS parameters; the default values are largely good, but they don't know your use case, so you will take a performance hit if you leave them untouched.
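
As a sketch of what overriding a parameter looks like (the group name and value are examples, not recommendations):

Code:
# Hypothetical group name and value - override one MySQL parameter in a custom parameter group.
aws rds modify-db-parameter-group \
  --db-parameter-group-name xf-mysql-params \
  --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate"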
 