I use MySQL Cluster (specifically the ndbcluster storage engine). It can handle tens of millions of queries per second (including writes), it's ACID compliant, and it's designed for zero downtime for end users, even during server maintenance, upgrades, and backups, or unexpected events like a server being unplugged. It was originally built for telco use (call logging and call routing in phone systems) where you can't take the database down for any reason. It does have high resource requirements though... for example, my servers are interconnected via 54Gbit InfiniBand, because the bottleneck when pushing that many queries around is communication between nodes.
To get that kind of throughput, by default all data and indexes are stored in-memory, with everything stored on at least two independent nodes (which is why it survives things like nodes being unplugged or taken down for maintenance). All nodes are fully write capable. In my case, I have 8 physical servers with 1TB of RAM each, of which 256GB per server is allocated to the data node. That's 2TB of RAM allocated to data/indexes, cut in half because everything lives on 2 different physical servers for redundancy... so 1TB "usable". I'm currently using 83% of that, which leaves about 174GB free. You can also increase/decrease that allocation on the fly.
Code:
-- NDB Cluster -- Management Client --
ndb_mgm> all report mem
Connected to Management Server at: localhost:1186
Node 11: Data usage is 83%(6433357 32K pages of total 7707291)
Node 11: Index usage is 34%(681317 32K pages of total 1955251)
Node 12: Data usage is 83%(6433121 32K pages of total 7707285)
Node 12: Index usage is 34%(681323 32K pages of total 1955487)
Node 13: Data usage is 83%(6436872 32K pages of total 7707208)
Node 13: Index usage is 34%(681400 32K pages of total 1951736)
Node 14: Data usage is 83%(6440679 32K pages of total 7707135)
Node 14: Index usage is 34%(681473 32K pages of total 1947929)
Node 15: Data usage is 83%(6433374 32K pages of total 7707236)
Node 15: Index usage is 34%(681372 32K pages of total 1955234)
Node 16: Data usage is 83%(6433148 32K pages of total 7707302)
Node 16: Index usage is 34%(681306 32K pages of total 1955460)
Node 17: Data usage is 83%(6436924 32K pages of total 7707216)
Node 17: Index usage is 34%(681392 32K pages of total 1951684)
Node 18: Data usage is 83%(6440450 32K pages of total 7707239)
Node 18: Index usage is 34%(681369 32K pages of total 1948158)
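Those data and index pools are sized in the cluster's config.ini on the management nodes, and resizing them is a config change plus a rolling restart of the data nodes (one node group member at a time, so nothing goes offline). A minimal sketch, with illustrative values rather than my exact settings:
Code:
# config.ini on the management nodes -- illustrative values, not my exact config
[ndbd default]
NoOfReplicas=2      # every table fragment is stored on 2 data nodes
DataMemory=235G     # in-memory data per data node (the 32K pages in the report above)
IndexMemory=60G     # hash index memory per data node (deprecated in newer releases, folded into DataMemory)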
MySQL Cluster has different types of nodes... NDB = data node, MGM = management node, API = API node (the "normal" mysqld that handles connections to the data nodes for querying data).
Code:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 8 node(s)
id=11 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 0)
id=12 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 1, *)
id=13 @192.168.10.22 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 2)
id=14 @192.168.10.23 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 3)
id=15 @192.168.10.24 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 0)
id=16 @192.168.10.25 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 1)
id=17 @192.168.10.26 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 2)
id=18 @192.168.10.27 (mysql-5.7.33 ndb-7.6.17, Nodegroup: 3)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17)
id=2 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17)
[mysqld(API)] 33 node(s)
id=21 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17)
id=22 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17)
id=23 @192.168.10.22 (mysql-5.7.33 ndb-7.6.17)
id=24 @192.168.10.23 (mysql-5.7.33 ndb-7.6.17)
id=25 @192.168.10.24 (mysql-5.7.33 ndb-7.6.17)
id=26 @192.168.10.25 (mysql-5.7.33 ndb-7.6.17)
id=27 @192.168.10.26 (mysql-5.7.33 ndb-7.6.17)
id=28 @192.168.10.27 (mysql-5.7.33 ndb-7.6.17)
id=31 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17)
id=32 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17)
id=33 @192.168.10.22 (mysql-5.7.33 ndb-7.6.17)
id=34 @192.168.10.23 (mysql-5.7.33 ndb-7.6.17)
id=35 @192.168.10.24 (mysql-5.7.33 ndb-7.6.17)
id=36 @192.168.10.25 (mysql-5.7.33 ndb-7.6.17)
id=37 @192.168.10.26 (mysql-5.7.33 ndb-7.6.17)
id=38 @192.168.10.27 (mysql-5.7.33 ndb-7.6.17)
id=41 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17)
id=42 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17)
id=43 @192.168.10.22 (mysql-5.7.33 ndb-7.6.17)
id=44 @192.168.10.23 (mysql-5.7.33 ndb-7.6.17)
id=45 @192.168.10.24 (mysql-5.7.33 ndb-7.6.17)
id=46 @192.168.10.25 (mysql-5.7.33 ndb-7.6.17)
id=47 @192.168.10.26 (mysql-5.7.33 ndb-7.6.17)
id=48 @192.168.10.27 (mysql-5.7.33 ndb-7.6.17)
id=51 @192.168.10.20 (mysql-5.7.33 ndb-7.6.17)
id=52 @192.168.10.21 (mysql-5.7.33 ndb-7.6.17)
id=53 @192.168.10.22 (mysql-5.7.33 ndb-7.6.17)
id=54 @192.168.10.23 (mysql-5.7.33 ndb-7.6.17)
id=55 @192.168.10.24 (mysql-5.7.33 ndb-7.6.17)
id=56 @192.168.10.25 (mysql-5.7.33 ndb-7.6.17)
id=57 @192.168.10.26 (mysql-5.7.33 ndb-7.6.17)
id=58 @192.168.10.27 (mysql-5.7.33 ndb-7.6.17)
id=59 (not connected, accepting connect from any host)
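From the application's side, the API nodes behave like any normal mysqld; the only difference is that tables use the NDB engine, so the same table is visible and writable from every SQL node at once. A quick sketch (the table and column names here are made up for illustration):
Code:
-- Run on any mysqld (API) node; the rows live on the data nodes.
CREATE TABLE session_log (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_user (user_id)
) ENGINE=NDBCLUSTER;

INSERT INTO session_log (user_id) VALUES (42);

-- Immediately readable (and writable) from any other API node on any other server.
SELECT * FROM session_log WHERE user_id = 42;
Existing InnoDB tables can be moved over with ALTER TABLE ... ENGINE=NDB, keeping in mind NDB has different row size and index limits than InnoDB.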
I haven't gotten around to upgrading to MySQL 8 yet because it always scares me to touch things when everything is running great. The last time I attempted to go to 8.0 (in 2021), I ran into a MySQL bug that forced a rollback to 7.6:
bugs.mysql.com
...and before that, when I tried in 2020, it was a different bug:
forums.mysql.com
One of these days when I'm feeling brave, I'll try it again... hah
For storage of things that change often but aren't accessed as frequently as XenForo's PHP files (user-uploaded content like avatars and attachments), I used to use GlusterFS, but I've since moved most of that to Cloudflare R2.
For things that don't change often (PHP files, template/phrase edits, etc.), I use csync2 to keep all the servers synced, since having the files locally is faster than a networked filesystem like Gluster. The csync2 command is triggered automatically by the Filesystem addon whenever something in the code cache or the addon development process changes a file.
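The csync2 side of it is pretty small... one group listing the hosts and the paths to keep in sync (hostnames and paths below are placeholders, not my actual layout):
Code:
# /etc/csync2.cfg -- placeholder hostnames/paths for illustration
group webcluster {
    host web1 web2 web3 web4 web5 web6 web7 web8;
    key /etc/csync2.key_webcluster;

    include /var/www/xenforo;
    exclude /var/www/xenforo/internal_data/temp;

    auto younger;   # on conflict, keep the most recently modified copy
}
Then csync2 -x pushes any changed files out to the other hosts, which is what gets run when a file changes.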
In my setup, all servers can do any task... all run Nginx, PHP-FPM, memcached, etc. Since MySQL Cluster is fully write capable on any node, all the web servers simply communicate with MySQL at localhost, even though they are physically different servers.
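Each web server's local mysqld just needs the NDB engine enabled and pointed at the management nodes to act as an API node, so the application connects to localhost like it would to any standalone MySQL server. A minimal sketch using the management node addresses from the cluster layout above:
Code:
# my.cnf on each web server's local mysqld (API node) -- minimal sketch
[mysqld]
ndbcluster                                     # enable the NDB storage engine
ndb-connectstring=192.168.10.20,192.168.10.21  # the two management nodes

[mysql_cluster]
ndb-connectstring=192.168.10.20,192.168.10.21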