
XenForo on Amazon EC2

#1
Has anyone successfully hosted a large install of XenForo on EC2 across multiple instances? We are currently hosted on a single instance behind an ELB (for free SSL); however, we are now at the point where we need to look at adding additional EC2 instances.

We are using RDS, so we would just need to script the process of installing XenForo and copying any media to the new instances.

Any thoughts?
 

Marcus

Well-known member
#3
My setup is multiple EC2 instances behind an ELB, connected to S3, Elasticsearch, RDS, and CloudFront (with WAF). You have to move XenForo's data and internal_data directories to an external service such as AWS S3 (which I use) or AWS EFS.

The "install script" is AWS OpsWorks, which can auto-launch new instances when load increases.

The easiest solution for you is to add AWS CloudFront to serve static files from its cache, so those requests rarely touch your server. You get 50 GB of traffic free in your first year. This helps most if you are on Apache now; if you use nginx, the difference isn't as noticeable.
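
A quick way to confirm CloudFront is actually caching your static files is to request the same asset twice and inspect the X-Cache response header. The URL below is a placeholder for your own distribution or CNAME:

```shell
#!/bin/sh
# Request a static asset twice; the first response is typically
# "Miss from cloudfront", the second "Hit from cloudfront".
URL="https://example.yourforum.com/styles/default/xenforo/logo.png"

curl -s -o /dev/null -D - "$URL" | grep -i '^x-cache'
curl -s -o /dev/null -D - "$URL" | grep -i '^x-cache'
```

If the second request still reports a miss, check the cache behavior and the Cache-Control headers your origin sends for those paths.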
 

Jim Boy

Well-known member
#4
We autoscale up to eight instances and down to one overnight; you save a lot of money doing that.

Run external data off S3 using the bd Attachment Store add-on.

Set S3 up as a stream in your config so that avatars and other external data work without any add-ons.

Use bd Attachment Store for attachments.

The only internal_data that we share across servers is the proxied images, for which we use EFS; the proxied images are then served via CloudFront (I wrote an add-on for that, available in the Resources section). We don't need any other internal data shared.

RDS runs on provisioned-IOPS EBS volumes, but I am seriously considering switching to Aurora.

Data is kept in sync between servers via rsync, which is triggered on a server launch or whenever I need something propagated.

I run a standalone small instance that allows PHP scripts 512 MB of memory; any less than that and updates will fail. This box also runs Elasticsearch. Consider using a small or medium ElastiCache node for memcache.
 

Marcus

Well-known member
#5
Hi Jim

Did you test performance for ElastiCache vs "set no cache" (meaning XF fetches the sessions from RDS)?

I store a new application zip each time I upgrade XF or my style; I wrote a small bash script for that. All servers are then deleted and I bring them up again from scratch.
 

Jim Boy

Well-known member
#6
RDS is not an efficient way of doing caching, as it writes everything to disk. I have been happy with my single ElastiCache node, which hasn't missed a beat in over two years.
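
Before pointing XenForo's session cache at an ElastiCache node, it can be worth sanity-checking the endpoint directly over memcached's text protocol. The hostname below is a placeholder for your own node:

```shell
#!/bin/sh
# Ask the memcached node for its stats; a healthy ElastiCache endpoint
# answers with "STAT ..." lines before the connection closes.
HOST="my-cache.abc123.use1.cache.amazonaws.com"
printf 'stats\r\nquit\r\n' | nc -w 2 "$HOST" 11211 | grep -m1 '^STAT uptime'
```

Remember that ElastiCache is only reachable from inside the VPC, so run this from one of the web nodes, not your workstation.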

The application zip approach would work, I suppose, but it seems slow. For example, if I put in a new add-on, I upload the code to my central server and propagate it out, which only takes a few seconds to deploy to all servers, then run the add-on XML. The whole process takes a couple of minutes, plus the actual cache rebuilding, which takes ten minutes. Servers are prebaked AMIs; once or twice a year I'll update the base AMI. The system was designed to scale as quickly as possible, and saving 30 seconds or a minute is important to us.
 

BoostN

Active member
#7
(quoting Marcus, post #3)
How does pricing compare to traditional "Dedicated" boxes?
 

Marcus

Well-known member
#9
My script zips the XF install directory and uploads the zip to S3 (takes around 30 seconds). AWS OpsWorks then deploys the new app (this zip) to the OpsWorks instances. It doesn't work automatically yet; in a perfect world it would zip and upload each time there is a style edit, etc.

Guests get a CloudFront-cached view (I wrote an XF add-on to handle caching for CloudFront).
 
#10
(quoting Jim Boy, post #4)

Wondering if you have pulled the trigger on the Aurora move? I am contemplating the same thing, @Jim Boy.
 
#11
(quoting Marcus, post #5)
@Marcus Any chance you would be able to share the bash script?
 

Jim Boy

Well-known member
#12
(quoting post #10)
I haven't yet. I have trialled it and it ran really well. It comes down to cost, though, as the pricing model is very different. It's something you want to configure correctly; otherwise you end up spending more money than you should.
 

megabosx

Active member
#13
(quoting Jim Boy, post #4)
This is a very cool setup. Have you experimented with Elastic Beanstalk?