XenForo high performance benchmarking (What sized server do I need?)

I just thought it would be good to find out if this may be true, and if so, to pick up his solution in a future XF version: Compile Each Event Listener Type Into Single PHP File
I'm really hoping they work in the "hint" solution I posted there... Because the simplest page on my dev server includes ~200 PHP files... 109 of which are just event listeners that ultimately never trigger anything. Compound that with the fact that things like every load_class_model event listener gets triggered every time ANY model is initiated... A normal page on my dev server ends up firing about 2,400 static event listeners to see if anything should be run... and in the end, only 2 or 3 actually do.

It certainly doesn't make it slow enough that it's unusable, but I'd guess about 15% of the page rendering time involves firing off all these event listeners that really don't need to be.
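One way to read the "compile into a single PHP file" proposal is: instead of include()-ing one small file per listener on every request, an installer step concatenates them into one generated file, so a page load does a single include. A minimal sketch of that idea, using made-up file names and trivially simple listener files (this is an illustration of the concept, not XenForo's actual code):

```php
<?php

// Simulate per-listener PHP files, then compare "one include per
// file" against "one include of a pre-compiled file". All paths
// and file contents here are invented for the demo.
$dir = sys_get_temp_dir() . '/listeners_demo_' . getmypid();
mkdir($dir);

// Five tiny listener files, one callback registration each.
for ($i = 0; $i < 5; $i++) {
    file_put_contents("$dir/listener_$i.php",
        "<?php\n\$GLOBALS['loaded'][] = $i;\n");
}

// Naive approach: one include() per listener file, every request.
$GLOBALS['loaded'] = [];
foreach (glob("$dir/listener_*.php") as $file) {
    include $file;
}
$perFileCount = count($GLOBALS['loaded']); // 5 separate includes

// "Compiled" approach: concatenate once (e.g. when an addon is
// installed or rebuilt), then every request needs one include.
$compiled = "$dir/compiled.php";
$code = "<?php\n";
foreach (glob("$dir/listener_*.php") as $file) {
    $code .= substr(file_get_contents($file), strlen("<?php\n"));
}
file_put_contents($compiled, $code);

$GLOBALS['loaded'] = [];
include $compiled; // a single include, same listeners registered
$compiledCount = count($GLOBALS['loaded']);

echo "$perFileCount includes vs 1, both load $compiledCount listeners\n";
```

With an opcode cache the per-file cost shrinks, but the stat/lookup overhead of ~109 includes per request is still what the compiled-file suggestion is trying to remove.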
 
I'm really hoping they work in the "hint" solution I posted there...

I'm hoping so too. It is a simple and clean solution you found. Good work, thank you for sharing! We already use it for our own internal addon development.

I was also hoping that Slavik's performance test might be able to show how serious this problem is.
 
I'm really hoping they work in the "hint" solution I posted there... Because the simplest page on my dev server includes ~200 PHP files... 109 of which are just event listeners that ultimately never trigger anything. Compound that with the fact that things like every load_class_model event listener gets triggered every time ANY model is initiated... A normal page on my dev server ends up firing about 2,400 static event listeners to see if anything should be run... and in the end, only 2 or 3 actually do.

It certainly doesn't make it slow enough that it's unusable, but I'd guess about 15% of the page rendering time involves firing off all these event listeners that really don't need to be.

Would you be open to making some artificial way of replicating this? Mainly because I don't want to go down the route of installing 200 different addons :D
 
Would you be open to making some artificial way of replicating this? Mainly because I don't want to go down the route of installing 200 different addons :D
You don't need 200 addons... As an example, I have ~50 addons, and XenForo_CodeEvent::fire() is called 2,808 times on my home page (of which something actually fires about 7 times... so it makes ~2,800 checks that ultimately do nothing).

For example, I have a total of 22 load_class_model event listeners... all of which are called every time you initiate a model. So for sake of math, let's say you have a page that uses 10 models... you just called 220 events to see if anything needs to run.

It's really the fact that certain event types (loading classes, template hooks, etc.) run every time a class/template is used... it just multiplies the number of events being fired (or attempting to fire).
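The 22 × 10 = 220 figure above can be shown with a stripped-down static event dispatcher. The shape loosely mirrors `XenForo_CodeEvent::fire()`, but this is a simplified illustration, not XenForo's actual implementation:

```php
<?php

// Minimal static event dispatcher: every listener registered for
// an event name runs on every fire() call, whether or not it does
// anything useful. (Sketch only -- not XenForo's real code.)
class CodeEvent
{
    protected static $listeners = [];
    public static $invocations = 0; // how many callbacks were invoked

    public static function addListener($event, callable $callback)
    {
        self::$listeners[$event][] = $callback;
    }

    public static function fire($event, array $args = [])
    {
        foreach (self::$listeners[$event] ?? [] as $callback) {
            self::$invocations++;
            $callback($args);
        }
    }
}

// 22 load_class_model listeners, as in the example above...
for ($i = 0; $i < 22; $i++) {
    CodeEvent::addListener('load_class_model', function ($args) {
        // typical listener: inspects the class name, matches nothing
    });
}

// ...and the event fires once per model instantiation,
// so 10 models means 22 x 10 = 220 listener invocations.
for ($m = 0; $m < 10; $m++) {
    CodeEvent::fire('load_class_model', ['class' => "Model_$m"]);
}

echo CodeEvent::$invocations, "\n"; // 220
```

Almost all of those 220 calls are no-ops; the cost is purely the dispatch overhead.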
 
Was the only reason for mentioning Linode just to get some RAM quantities to do testing on? I was kind of confused about that part.

Also, will you give some more specs on the hardware like CPU model and RAM speed?
 
Slavik, how are you testing these? I've got a mild server in use that's not the most potent, but you might want to add it to your list, or I can do the benchmarks if you like? Around February, I'll probably have a more powerful server, with 32GB RAM and an SSD, to test things out on.
 
Was the only reason for mentioning Linode just to get some RAM quantities to do testing on? I was kind of confused about that part.

Also, will you give some more specs on the hardware like CPU model and RAM speed?

Linode is a very popular choice for people starting to move from shared hosting into needing their own VPS/server, so looking at their packages seems like a good baseline. I then compared those VPS specs across several other popular hosts and they seem to be representative of most VPS hosting packages. Linode was mentioned just because it's a familiar name with familiar package sizes.

Slavik, how are you testing these? I've got a mild server in use that's not the most potent, but you might want to add it to your list, or I can do the benchmarks if you like? Around February, I'll probably have a more powerful server, with 32GB RAM and an SSD, to test things out on.

I'll be firing up multiple AWS instances and generating the load in parallel.
 
I just read your server specs. You should consider replacing the HDD with an SSD in one of them.
 
I just read your server specs. You should consider replacing the HDD with an SSD in one of them.

SSDs are not really available en masse or affordable to many website owners as it currently stands. These tests are to provide a baseline set of results using "standard" hardware. Any upgrades to high-performance hardware will obviously provide extra benefits.
 
SSDs are not really available en masse or affordable to many website owners as it currently stands. These tests are to provide a baseline set of results using "standard" hardware. Any upgrades to high-performance hardware will obviously provide extra benefits.

In a serious server setup an SSD is a must. SSDs start at $150 (you only need a small one in a server), which is not much more than the price of a 1TB 7200 RPM HDD.
 
In a serious server setup an SSD is a must. SSDs start at $150 (you only need a small one in a server), which is not much more than the price of a 1TB 7200 RPM HDD.

True, however the people who have the requirements for such performance hardware probably already have such provisions in place, and those people are not representative of the majority.
 
Are 15K SAS drives even worth the money these days?
It depends on what you are doing with them more than anything.

For servers, I like the Seagate 2.5" 10k enterprise drives: http://www.seagate.com/internal-har...rd-drives/hdd/enterprise-performance-10K-hdd/

Only 900GB, but 64MB cache, a 204MB/sec sustained data rate and the really important thing for me (a mean time between failures of 2,000,000 hours, which works out to more than 228 years). When you have a bunch of hard drives (I'm getting ready to build a cluster of servers that will have 72 hard drives), reliability becomes more and more important. I don't want to be going to the data center to replace a failed drive more than once every couple of years.

At the end of the day, with the drives set up in a RAID-6 (2 spares for extra redundancy), you end up with a real-world read/write rate of about 800MB/sec which is up into the crazy numbers regardless of the hard drive type.
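The MTBF figures above work out as follows (using the post's assumed values of a 2,000,000-hour per-drive MTBF and 72 drives; this is back-of-the-envelope math, not a full reliability model):

```php
<?php

// Back-of-the-envelope MTBF math for the figures quoted above.
$mtbfHours    = 2000000;        // per-drive MTBF from the spec sheet
$hoursPerYear = 24 * 365.25;    // 8766 hours

// A single drive's MTBF expressed in years:
$perDriveYears = $mtbfHours / $hoursPerYear; // ~228 years

// With many drives failing independently, the expected time
// between a failure *somewhere* in the array divides by the
// drive count:
$driveCount = 72;
$arrayYears = ($mtbfHours / $driveCount) / $hoursPerYear; // ~3.2 years

printf("Per drive: %.0f years; %d-drive array: %.1f years between failures\n",
    $perDriveYears, $driveCount, $arrayYears);
```

So a 72-drive array with these drives should, on average, see one failure roughly every three years, which lines up with the "once every couple years" goal.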
 
Interested to see performance results when having lots of image files...
Well unless you are getting crazy traffic (like maybe 500 *new* [not returning] visitors per second), the hard drive isn't going to make a whole lot of difference. It's not like an image is downloaded from the server every time it's seen (unless the server is really poorly configured).
 
Well unless you are getting crazy traffic (like maybe 500 *new* [not returning] visitors per second), the hard drive isn't going to make a whole lot of difference. It's not like an image is downloaded from the server every time it's seen (unless the server is really poorly configured).

So let's say you really have a lot of images on your website, for example a website like Pinterest.
Is this then a matter of software capability or rather a matter of server capacity?

Would the XF software be able to handle something like this?
 
It depends on how many users were using it at once... But yes, it's always more of a software issue at the application level... or at least it should be (caching with things like memcache).
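The "caching with things like memcache" point above is usually implemented as cache-aside: check the cache first, and only fall through to the database on a miss. A minimal sketch, where `ArrayCache` is a stand-in for a real Memcached client (so the example stays self-contained) and `fetchImageMetaFromDb()` is a hypothetical placeholder for the expensive database work:

```php
<?php

// Cache-aside sketch. ArrayCache mimics the get()/set() shape of
// a memcache-style client; in production you would use the real
// Memcached extension pointed at a cache server.
class ArrayCache
{
    private $data = [];
    public function get($key)         { return $this->data[$key] ?? false; }
    public function set($key, $value) { $this->data[$key] = $value; }
}

$dbHits = 0;

// Hypothetical expensive lookup -- stands in for a DB query.
function fetchImageMetaFromDb($imageId)
{
    global $dbHits;
    $dbHits++;
    return ['id' => $imageId, 'path' => "/data/images/$imageId.jpg"];
}

function getImageMeta($imageId, ArrayCache $cache)
{
    $key  = "image_meta_$imageId";
    $meta = $cache->get($key);
    if ($meta === false) {          // miss: do the expensive work once...
        $meta = fetchImageMetaFromDb($imageId);
        $cache->set($key, $meta);   // ...then keep it for later requests
    }
    return $meta;                   // hit: no database work at all
}

$cache = new ArrayCache();
getImageMeta(42, $cache); // miss -> 1 DB hit
getImageMeta(42, $cache); // hit
getImageMeta(42, $cache); // hit

echo $dbHits, "\n"; // 1
```

Three requests, one database hit: that's why a Pinterest-scale image site is more an application-level caching problem than a raw disk problem.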
 