Yup. I know that I have a minority opinion here but in my eyes that is the biggest fail that XF did and does.
Yeah... I can't think of a single piece of software today that doesn't require bolting on aftermarket software.
It's a shame, we are going to lose the indie internet over it if someone doesn't intervene.
Very interested to learn about it once you are ready!
Thanks, it would be great to have some people interested in it once I've proven it out on our forum.
Install requirements should be:
- Install ClickHouse (we need a hyperfast database with excellent compression, and this is it)
- Install fail2ban (we use fail2ban to coordinate iptables bans, which are very fast) <-- this requirement will be removed in future versions
- Include a .php file ahead of your application's bootloader
- Edit a configuration file that's in PHP array format
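The config file in PHP array format might look something like this — to be clear, every key name below is my guess at the shape, not the project's actual schema:

```php
<?php
// Hypothetical php2ban config file. All key names are illustrative
// assumptions; the real schema may differ.
return [
    'clickhouse' => [
        'host'     => '127.0.0.1',
        'port'     => 8123,              // ClickHouse HTTP interface
        'database' => 'php2ban',
    ],
    // Log file that fail2ban would tail for ban decisions
    'ban_log'       => '/var/log/php2ban/bans.log',
    'whitelist_ips' => ['127.0.0.1'],
];
```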
Should be super easy to set up; we would provide a fail2ban config file to make that part of the install easier.
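The promised fail2ban config would presumably be a filter plus a jail along these lines — the file paths, log format, and regex here are assumptions on my part, not the file php2ban would actually ship:

```ini
; /etc/fail2ban/filter.d/php2ban.conf (hypothetical)
[Definition]
failregex = ^.*php2ban BAN <HOST>\b

; /etc/fail2ban/jail.d/php2ban.local (hypothetical)
[php2ban]
enabled   = true
filter    = php2ban
logpath   = /var/log/php2ban/bans.log
maxretry  = 1
bantime   = 3600
banaction = iptables-allports
```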
Wouldn't that still be limited to the resources the server has if the logic is running on the server? What happens if a rogue bot hits your server with 10,000,000 requests per second? Obviously I don't know much about it, but it seems to me the better option would be to handle the problem upstream of your origin server, so the requests aren't consuming your server's network bandwidth or CPU cycles.
I can't think of anything that would run inside the PHP stack (or even upstream within the web server) on the server that would realistically be able to stop the sort of bad traffic my servers see without the servers themselves being wrecked from just needing to run that logic. I see at least 10,000 HTTP requests that need to be blocked per second on my servers (that's at the lowest end of the spectrum).
- the database is extraordinarily fast, PHP doesn't have to wait for writes to complete, and PHP can analyze the traffic on a background thread. Any overhead on the application is only a few milliseconds per hit.
- the analysis engine is very fast because we get to lean heavily on ClickHouse's fast C++ database.
- with php2ban, you could trap malicious requests much faster and then hand them off to fail2ban, which uses the Linux kernel's iptables system to apply the ban. This is computationally extremely cheap, so the limit on what kind of attack you can withstand is more the amount of bandwidth your server has access to.
- most webhosts have a ~10 Gbit/sec bandwidth pipe, so the system could theoretically withstand an attack somewhere around that size.
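As a sketch of how the drop-in file could log a hit without blocking the request, here is my guess using ClickHouse's HTTP interface with asynchronous inserts (`async_insert=1`, `wait_for_async_insert=0`); the table name, columns, and endpoint are all assumptions, not php2ban's actual code:

```php
<?php
// Hypothetical pre-bootloader hit logger. ClickHouse buffers the row
// server-side and returns immediately, so PHP never waits for the
// write to reach disk.
$row = json_encode([
    'ts'  => time(),
    'ip'  => $_SERVER['REMOTE_ADDR'] ?? '',
    'uri' => $_SERVER['REQUEST_URI'] ?? '',
    'ua'  => $_SERVER['HTTP_USER_AGENT'] ?? '',
]);

$url = 'http://127.0.0.1:8123/?' . http_build_query([
    'query'                 => 'INSERT INTO php2ban.hits FORMAT JSONEachRow',
    'async_insert'          => 1,
    'wait_for_async_insert' => 0,
]);

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $row,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT_MS     => 50,  // fail fast; never stall the page
]);
curl_exec($ch);
curl_close($ch);
```

The tight timeout is the key design choice in this sketch: if ClickHouse is down, the page still renders and you only lose a log row.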
So far I have lots of servers running fail2ban with 2-core CPUs, and fail2ban is very light.
Only on our XenForo site, Endless Sphere, do I see fail2ban consuming a few percent of the CPU. This is interesting because fail2ban is written in Python and uses SQLite as its database (which has fast reads but slow writes), so in theory our system could be more performant than fail2ban, since PHP is a few times faster than Python and ClickHouse should be a few times faster than SQLite in rapid read/write situations.
It helps that our weblogs are prefiltered before they go into fail2ban, to isolate only the hits that generate server load. If you were to feed in all the .js/.css/.jpg/etc. hits, you'd have a worse time.
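That prefiltering step could be as simple as a grep that drops static-asset requests before fail2ban ever sees them — the log lines and extension list below are made up for illustration, not our actual filter:

```shell
# Build a tiny sample access log (fabricated lines, common log format).
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /index.php HTTP/1.1" 200 512
5.6.7.8 - - [01/Jan/2024:00:00:02 +0000] "GET /styles/site.css HTTP/1.1" 200 2048
9.9.9.9 - - [01/Jan/2024:00:00:03 +0000] "GET /js/app.js HTTP/1.1" 200 4096
1.2.3.4 - - [01/Jan/2024:00:00:04 +0000] "POST /login HTTP/1.1" 403 128
EOF

# Drop requests for common static extensions (optionally followed by a
# query string); only the load-generating dynamic hits survive.
grep -vE '\.(js|css|jpe?g|png|gif|ico|svg|woff2?)(\?[^" ]*)? HTTP/' \
  /tmp/sample_access.log > /tmp/filtered_access.log

cat /tmp/filtered_access.log
```

Of the four sample lines, only the `/index.php` and `/login` hits remain to be fed into the ban logic.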