I am about to develop a large forum. Can you share the specs of your servers? Do you recommend multi-core/multi-CPU setups? I am amazed you are doing it with such low specs.
Where are you getting DBaaS, and why not use your own servers for that?
Are you using S3 for member files or for XenForo file storage? If for XenForo, how do you do that?
Specs were listed above in our main post. Keep in mind every forum has different requirements. Our forum handles large bursts of traffic when mod packages get uploaded to our gaming community, so we require multiple servers for the extended throughput. If we did not have our mod resource, we would probably be using a single server.
XenForo requires attachments/downloads to be decoded and served by PHP (they are not publicly facing; files are stored as <hash>.data). We are considering a different solution that would effectively modify XFRM to make large volumes of downloads less crippling (properly offloading S3 to nginx with a cache, implementing a queue system, etc.).
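As a rough illustration of that nginx offloading idea, something like the following could sit between the forum and S3. This is a minimal sketch, not our actual config; the bucket name, cache path, and zone sizes are placeholders:

```nginx
# Sketch: cache S3-backed downloads at the origin so PHP doesn't stream them.
# Bucket/endpoint names and cache sizes below are placeholders.
proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3_cache:50m
                 max_size=10g inactive=7d use_temp_path=off;

server {
    # Marked internal: PHP would still do its permission check, then hand
    # the request off to this location via an X-Accel-Redirect header.
    location /s3-downloads/ {
        internal;
        proxy_pass https://example-bucket.nyc3.digitaloceanspaces.com/;
        proxy_set_header Host example-bucket.nyc3.digitaloceanspaces.com;
        proxy_cache s3_cache;
        proxy_cache_valid 200 7d;           # keep good objects for a week
        proxy_cache_use_stale error timeout updating;
        proxy_ignore_headers Set-Cookie;    # keep S3 responses cacheable
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The internal + X-Accel-Redirect pattern keeps downloads permission-gated while letting nginx do the heavy byte-pushing instead of a php-fpm worker.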
Right now with CloudFlare we have defined rules that cache attachments (in posts) so CloudFlare doesn't have to request the data from our origin servers, reducing the number of requests hitting our infrastructure.
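For reference, a CloudFlare Cache Rule along these lines would do it (the path pattern is an assumption; match it to your own route structure):

```
# Hypothetical Cache Rule (Caching -> Cache Rules in the dashboard)
# Expression:
(http.request.uri.path contains "/attachments/")
# Action: Eligible for cache, with a long Edge TTL (e.g. 1 month)
```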
Our approach was to eliminate as many requests as possible from origin and offload them to our CDN/cache layer, which is why we are able to run as slim as we do. Bear in mind, we cannot sustain a major release with these low specs: when a major release is upcoming we scale our servers up to anywhere from 8-16 GB of RAM and 4-8 CPU cores per node for up to 12 hours.
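Purely as an illustration of that temporary scale-up on DigitalOcean (the droplet ID and size slugs below are placeholders), it can be as simple as a reversible resize:

```sh
# Hypothetical pre-release scale-up of a DigitalOcean droplet.
# Omitting --resize-disk keeps the resize reversible afterwards.
doctl compute droplet-action power-off 123456789 --wait
doctl compute droplet-action resize 123456789 --size s-4vcpu-8gb --wait
doctl compute droplet-action power-on 123456789 --wait
# ...after release traffic subsides, resize back down the same way.
```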
S3 stores our internal_data and our external_data in two separate buckets to spread requests across rate limits. We use DigitalOcean for our backend so we can leverage the free transit inside the same data center.
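For anyone wondering how that wiring looks, here is a minimal sketch of the common XenForo 2.x approach (Flysystem S3 adapters in src/config.php, typically via an S3 add-on); the bucket names, region, and endpoint are placeholders, not ours:

```php
<?php
// Sketch only: point XenForo's two data roots at separate S3-compatible
// buckets (DigitalOcean Spaces here). Assumes an add-on bundling
// aws-sdk-php and league/flysystem-aws-s3-v3.
$s3 = function () {
    return new \Aws\S3\S3Client([
        'credentials' => ['key' => '...', 'secret' => '...'],
        'region'      => 'nyc3',
        'version'     => 'latest',
        'endpoint'    => 'https://nyc3.digitaloceanspaces.com',
    ]);
};

// Bucket 1: public external_data (avatars, attachment thumbnails)
$config['fsAdapters']['data'] = function () use ($s3) {
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'example-external', 'data');
};
$config['externalDataUrl'] = function ($externalPath, $canonical) {
    return 'https://example-external.nyc3.digitaloceanspaces.com/data/' . $externalPath;
};

// Bucket 2: private internal_data (attachments stored as <hash>.data)
$config['fsAdapters']['internal-data'] = function () use ($s3) {
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'example-internal', 'internal_data');
};
```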
DBaaS from DigitalOcean is much cheaper in comparison to running our own highly available database cluster. If we did it ourselves, it would cost us $30-50/month for 3 servers, not to mention the extended amount of time we would need to spend maintaining and optimizing it. Why spend more when there is an already-baked solution?
Curious: how many max children do you allocate per CPU?
This depends on each XenForo installation; you would want to audit your PHP processes via the terminal and determine the amount of RAM each php-fpm child uses while your forum is active. Our result ended up being 384 MB, meaning we can effectively fit 2 children per GB per node while leaving headroom for file sync and nginx.
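A quick way to do that audit (the process name varies by distro, e.g. php-fpm8.2, so treat this as a sketch):

```sh
# Average resident memory across php-fpm processes while the forum is busy.
# Note this includes the master process, which slightly skews the average.
ps --no-headers -o rss -C php-fpm \
  | awk '{sum+=$1; n++} END {printf "%d processes, avg %.0f MB RSS\n", n, sum/n/1024}'

# Then size the pool roughly as:
#   pm.max_children = floor((node_ram - nginx/file-sync overhead) / per-child RSS)
```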
Currently we are experiencing higher-than-average traffic, so we have scaled up to more workers by moving up a node size.
Our end goal is to turn our installation into a Kubernetes solution that we can offer.
I split the webserver into several pools.
One php-fpm pool for CPU-intensive work with fairly short PHP timeouts, sized at 1 worker per CPU, and another pool for external-IO-intensive work (URL unfurling, image proxying, signups) with a much higher worker count and much laxer PHP request timeouts. I also do things like ensure the admincp gets its own dedicated php-fpm pool, and apply different rate limiting per nginx location.
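A hedged sketch of what that split can look like; the pool names, socket paths, and limits below are illustrative guesses, not Xon's actual config:

```ini
; /etc/php/*/fpm/pool.d/ - one pool per workload
[cpu]                                  ; CPU-bound page rendering
listen = /run/php/fpm-cpu.sock
pm = static
pm.max_children = 4                    ; ~1 worker per CPU core
request_terminate_timeout = 15s        ; fail fast on CPU work

[io]                                   ; external IO: unfurl, image proxy, signups
listen = /run/php/fpm-io.sock
pm = dynamic
pm.max_children = 32                   ; many cheap, mostly-waiting workers
request_terminate_timeout = 120s       ; lax timeout for slow remote hosts

[admincp]                              ; dedicated pool so admin pages stay responsive
listen = /run/php/fpm-admin.sock
pm = static
pm.max_children = 2
```

On the nginx side, each location is then routed to its pool with its own rate limit, along these lines (the matched paths are assumptions):

```nginx
limit_req_zone $binary_remote_addr zone=io_rl:10m rate=5r/s;

location ~ ^/(proxy\.php|register) {           # IO-heavy routes
    limit_req zone=io_rl burst=10 nodelay;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/fpm-io.sock;
}
location = /admin.php {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/fpm-admin.sock;
}
location ~ \.php$ {                            # everything else: CPU pool
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/fpm-cpu.sock;
}
```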
Never thought to separate pools for different functions; I'll have to look further into this, as implementing it on our Kubernetes service would probably be extremely beneficial. Thanks for the tip, @Xon.