Fastest XenForo site I've seen, how are they so fast?

The owner of the DIY Solar forum answered my DM; not much detail, but interesting...

...We run specially made software for ours. Apache and the others weren't cutting it for max user connections. The server software was a pain for years. The server team takes care of that...
 
..if you are running Apache with something other than the default mod_php (for example, php-fpm instead), you should expect those individual PHP processes to lag heavily on closing TCP/IP connections.. which can mean that the default Linux per-process limit of ~1024 open file descriptors (and therefore concurrent connections) isn't high enough and will max out during peaks of AI scraper traffic.
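
If you want to see what that limit actually is on your own box, it's quick to check (the apache2 process name is an assumption for a Debian/Ubuntu install):

# Per-process open file limit for your shell (the stock soft limit is usually 1024)
ulimit -n
# The limit the running Apache parent actually inherited
grep 'open files' /proc/$(pgrep -o apache2)/limits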

Apache's default timeout/keepalive and maximum process settings are also much too low for high-volume sites.
With most applications, php-fpm tends to execute roughly 20% slower than mod_php.
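
On the defaults point, here is where those values live on a typical install, so you can see what you're starting from (paths assume a Debian/Ubuntu layout):

grep -i 'keepalive' /etc/apache2/apache2.conf
grep -Ei 'MaxRequestWorkers|ServerLimit' /etc/apache2/mods-available/mpm_prefork.conf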

..because of PHP-FPM's less efficient switching between PHP and HTML output, PHP will typically run slower on nginx as well.
Nginx's main advantage is serving static files faster.

MySQL's default InnoDB buffer sizes are also non-ideal out of the box, and increasing them can lead to big boosts in performance over baseline.
Running your database on localhost is also critical for speed, since it gives ~0 ms of per-call latency. Because PHP is a non-asynchronous, single-threaded language by default, a remote database (even one in the same datacenter!) will reduce the performance of the PHP app in proportion to how many database calls there are on a page.
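
On the buffer pool point, it's easy to see what you're currently running with before touching anything (128 MB is the stock default on recent MySQL/MariaDB):

# Current InnoDB buffer pool size in MB
mysql -e "SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;"
# To raise it, set innodb_buffer_pool_size under [mysqld] (e.g. in /etc/mysql/my.cnf) and restart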

In AWS, I noticed that with an EC2 instance and an RDS database in the same datacenter, each database call has 2-3 ms of lag before the query executes.. so a page with 10 database calls picks up roughly 20-30 ms of wait time.
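
If you want to measure that per-call overhead on your own setup, a plain network round trip gets you most of the way there (the hostnames below are placeholders):

# Network round trip from the web host to the DB host
ping -c 5 db.example.internal
# Connect + handshake round trip as seen by the MySQL client
mysqladmin --host=db.example.internal --user=app -p ping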
 
..if you are running Apache with something other than the default mod_php (for example, php-fpm instead), you should expect those individual PHP processes to lag heavily on closing TCP/IP connections.. which can mean that the default Linux per-process limit of ~1024 open file descriptors (and therefore concurrent connections) isn't high enough and will max out during peaks of AI scraper traffic.

Apache's default timeout/keepalive and maximum process settings are also much too low for high-volume sites.
With most applications, php-fpm tends to execute roughly 20% slower than mod_php.

..because of PHP-FPM's less efficient switching between PHP and HTML output, PHP will typically run slower on nginx as well.
Nginx's main advantage is serving static files faster.

What is the proof for this?

Many benchmarks show that Nginx with php-fpm is faster than Apache with mod_php, and at higher loads Apache falls behind by even more.
 
My proof:
I've run this benchmark periodically over the last few years and have consistently seen that statement hold true.
Some PHP applications don't show a drop in performance with PHP-FPM, but the majority do.

Nginx with php-fpm is faster than Apache at serving static files. Check what the parameters of a benchmark are before believing it; too many people like cherry-picking.

Apache is tuned by default for low-concurrency scenarios and nginx is tuned by default for higher concurrency.
Both can be tuned to within a few percent of each other, performance-wise.

I consistently choose Apache because mod_php is snappier, and I don't administer any websites whose load is mostly serving static files.

If you wish to do this benchmarking yourself, I recommend 'wrk', a multithreaded HTTP benchmarking tool.
It's available in the Ubuntu repos, so it's easy to install.

Compared to something like ApacheBench, being multithreaded means you can run really high-concurrency benchmarking scenarios. It's also a bit simpler to use than ApacheBench.
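
A minimal run looks like this; the URL, duration, and thread/connection counts are only placeholders to adjust for your own hardware:

sudo apt install wrk
wrk -t 14 -c 100 -d 30s http://localhost/helloworld.php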

You can perform these tests yourself easily :)
 
Perhaps spin up an XF demo to get a feel for how fast XF Cloud is, given how snappy my forum is. It uses Vultr, so perhaps if you ask the devs what spec virtual server they use for it, they might tell you and you can create a similar one. Cost is a factor of course and I have no idea what the specs of their servers are. I'd be interested to know.

We use Vultr High Frequency servers for the majority of customers, with a highly optimized software stack including Memcached, Redis, MariaDB, and nginx.
 
Looking at various header information, among other items, DIYSolarForum.com has the following:
  • It's a dedicated or colocated server (or group of servers); it does not appear to be on a VPS or cloud-hosted machine/network.
  • Hosted at GigeNET (a long-established provider in Chicago - they're rock solid).
  • Using Apache, probably with php-fcgi/FPM or something along those lines.
  • Leveraging a fair amount of caching and extensive compression wherever possible.
Would be nice to know the hardware specs that the forum (and DB/caching) runs on...
 
..if you are running Apache with something other than the default mod_php (for example, php-fpm instead), you should expect those individual PHP processes to lag heavily on closing TCP/IP connections.. which can mean that the default Linux per-process limit of ~1024 open file descriptors (and therefore concurrent connections) isn't high enough and will max out during peaks of AI scraper traffic.

Apache's default timeout/keepalive and maximum process settings are also much too low for high-volume sites.
With most applications, php-fpm tends to execute roughly 20% slower than mod_php.

This is completely wrong.

On light loads mod_php is faster, but unlike php-fpm, it scales terribly. It's loading PHP and its libraries for every request. PHP-FPM uses a process manager that allows it to handle many requests without mod_php's larger memory overhead, because the PHP processes are shared.

Likewise, high keep-alive timeouts are terrible on high-volume sites these days. Back in the dial-up days, when the browser might take a while to send more requests, it made sense to use a higher keep-alive, not for server-load reasons but to help with already terrible latency.

Under higher server loads, you want lower keep-alives to free up resources more quickly. It won't cause more connections, because any PC from the past 10-15 years will send its follow-up requests before a keep-alive of, say, 2-3 seconds times out.
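
For what it's worth, a short keep-alive like the one described above would look something like this on a Debian/Ubuntu Apache (the values are illustrative, not a recommendation):

# In /etc/apache2/apache2.conf:
#   KeepAlive On
#   KeepAliveTimeout 2
#   MaxKeepAliveRequests 1000
# Then validate the config and reload:
apachectl configtest && sudo systemctl reload apache2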

Last but not least, Apache's worker and event MPMs are much more responsive, more efficient, and scale better. Apache recommends php-fpm with the worker or event MPM, not the very old prefork MPM that mod_php requires.
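
Checking which MPM and PHP handler a box is actually running takes two commands (module names assume a stock Debian/Ubuntu build):

apachectl -V | grep -i 'server mpm'
apachectl -M 2>/dev/null | grep -Ei 'php|proxy_fcgi'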

If you want the best performance for both static and dynamic content, LiteSpeed does a better job than either Nginx or Apache. It benchmarks about the same as Nginx speed-wise, but with far less memory usage, which ultimately means it can scale higher on the same hardware.
 
Ok. You disagree with me. So let's run some quick benchmarks and find out who is right.

Ubuntu 25.04 with PHP 8.4; Apache and Nginx are both on default settings, running on a Core i5-14500T Dell Micro.
Nginx is using PHP-FPM, Apache is using mod_php
Hyperthreading is turned off.

I'm using the wrk tool with 14 threads.
helloworld.html is a file with the text 'hello world' in it.
helloworld.php is a file with the text 'hello world' also in it, but the file extension forces PHP to be loaded.
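
For anyone reproducing this, the two test files can be created like so (the docroot path is an assumption for a default Ubuntu install):

# Both files contain plain 'hello world'; the .php extension alone is what forces the PHP handler to run
echo 'hello world' | sudo tee /var/www/html/helloworld.html /var/www/html/helloworld.php > /dev/null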

I'm using 100 and 500 concurrency because that's somewhere around the maximum that 90%+ of XenForo sites will ever see.

Apache 500 concurrency

wrk http://localhost/helloworld.php -c 500 -t 14
Requests/sec: 186048.81

wrk http://localhost/helloworld.html -c 500 -t 14
Requests/sec: 212756.80


Nginx 500 concurrency

wrk http://localhost/helloworld.php -c 500 -t 14
Requests/sec: 57139.04

wrk http://localhost/helloworld.html -c 500 -t 14
Requests/sec: 419169.29


Apache 100 concurrency

wrk http://localhost/helloworld.php -c 100 -t 14
Requests/sec: 173069.76

wrk http://localhost/helloworld.html -c 100 -t 14
Requests/sec: 219708.11


Nginx 100 concurrency

wrk http://localhost/helloworld.php -c 100 -t 14
Requests/sec: 73375.09

wrk http://localhost/helloworld.html -c 100 -t 14
Requests/sec: 414821.27


We can definitely say that the overhead of instancing PHP is smallest with Apache and mod_php.

In this case the performance differential is much worse than I initially quoted. In real-world apps I typically see about 20% slower single-request times.
That indicates that not only the initialization but also the other interfacing is slower via PHP-FPM.

Why is it so different? Here's my theory.
PHP-FPM: has to manage its own processes and communicates with the web server over a Unix socket.
mod_php: PHP is instanced inside the Apache process, dies with the Apache process, and communicates back and forth with the web server at a lower level than Unix sockets.

So yeah, PHP-FPM has at least two added sources of overhead, so these numbers come as no surprise.
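
If you're curious which transport your own FPM pool uses, it's right in the pool config (path assumes Debian/Ubuntu PHP packages):

grep -E '^listen *=' /etc/php/*/fpm/pool.d/www.conf
# e.g. "listen = /run/php/php8.4-fpm.sock" (Unix socket) vs "listen = 127.0.0.1:9000" (TCP)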

And as we both expected, nginx is much faster at serving static files.

I'm also not noticing a memory usage spike when running either benchmark.. so I can't observe the higher memory usage you're talking about. Maybe the requests are completing too quickly.



For fun, here is another test that basically boots up a PHP framework and outputs hello world.
Zerolith is our high-performance competitor to Laravel.
Laravel is the most popular PHP framework.

This is to demonstrate that if your application is slow, it would still be slow even if you ran a web server written in assembly language. No web server can outrun a slow app.

[Attached image: benchmark chart, Zerolith vs Laravel vs plain HTML]
 
Respectfully, those benchmarks are flawed because of the "hello world" tasks. You lose the application's CPU overhead, which is where event-driven models like php-fpm shine. As CPU overhead goes up, event-driven response time scales better. That's why Nginx, LiteSpeed, and even Apache with php-fpm do better than Apache prefork with mod_php on real-world applications.

Second, prefork and mod_php do not support HTTP/2. This is going to be noticeable, especially on phones. A real-world application is going to have other payloads in addition to the PHP response, and will thus benefit from HTTP/2's multiplexing, which allows fewer connections to service the same number of simultaneous users. The lack of HTTP/2 support can be seen by visiting the forum in your signature - it's showing HTTP/1.1.

There are other factors not covered by your information, such as whether opcache is on and, if so, whether it's showing cache hits on subsequent requests; also keep-alive, Nginx worker processes, FastCGI buffer settings, the FPM pool model (dynamic, static, ondemand), min/max FPM children, and so on.
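
The HTTP/2 and opcache points at least are quick to check from a shell (the URL is a placeholder, and the config paths assume a Debian/Ubuntu layout):

# Protocol the server actually negotiates
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/
# Is the opcache extension configured to load for the FPM / Apache SAPIs?
grep -Rl 'opcache' /etc/php/*/fpm/conf.d/ /etc/php/*/apache2/conf.d/ 2>/dev/null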
 
I did note the limitations of my test already. I'm not trying to deceive anyone, and the command lines are retained in the benchmark output so you can reproduce the results.

Anything involving FPM, under either nginx or Apache, is slower for me. In a number of applications I've benchmarked, I've seen the same approximate 20% reduction in single-request speed (which of course adds up as concurrency increases).

HTTP/2 is great for responsiveness on the first hit, but the cost is that each hit to the PHP backend is slower. I don't like that tradeoff.
Subsequent hits will have the JS/CSS/etc. cached, so HTTP/2 is of less benefit there.

In my benchmarks, PHP is using defaults, just like FPM, nginx, and Apache.
 
Where is thumped hosted?
OVH in France. It's on a VM running WHM on a Xeon E5-1650v4 (6c/12t, 3.6 GHz/4.0 GHz) with 64 GB of 2133 MHz ECC RAM and NVMe storage (one of their cheapest "So You Start" servers, so not exactly high performance), with a handful of other sites also running on it. I have Engintron installed for Nginx and it's all running behind Cloudflare. I just switched to mod_lsapi a couple of days ago and the difference was huge.
 
The single biggest performance improvement I've seen lately came from switching from PHP-FPM to mod_lsapi. It's night and day.

(https://thumped.com/bbs)

and that's with 131 addons installed....
Interesting! I'll have to give this a poke in a test environment in the near future. Seeing that there is some sort of nginx module support ( https://docs.cloudlinux.com/cloudlinuxos/cloudlinux_os_components/#nginx-lsapi-module ), that'll make for a very interesting pairing. Granted, the entire nginx project and that module will need to be compiled from source on a Debian install. It seems to be a bit limited to CloudLinux / cPanel(?) right now.

Edit: Maybe not fully open source? I'm having a difficult time finding nginx module support outside of LiteSpeed's SAPI source here: https://github.com/php/php-src/tree/master/sapi/litespeed - which isn't geared towards nginx as far as I can tell.
 
I just enabled it in WHM/EasyApache...
 