I'm in the same boat basically. We've got a robust setup as well and a good network border control to ward off the rampage of bots and unwanted crawlers. CF is nice for a plug-n-play where you don't have in-house expertise to do it all. I'm afraid there's too much brain drain going on and there's certainly a lot less desire out there to skill up in the server ops world (OS/DB/net/app layers). People just want it to work out of the box instead of learning how it actually works. (Great fit for cloud users, turnkey solution and so on...)

That's a view which neglects that for many it has nothing to do with in-house expertise. For some, it's a matter of spending time where it's most cost-effective. I wrote a suite of tools for my sites, including a dynamic robots.txt generator which would create rules based on what was fetching it and auto-block any IP that disregarded the rules. I had a set of smart rules that could also determine whether a user agent was being spoofed by crawlers, and much more, along with prepackaged tools like fail2ban.
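For illustration only - not the commenter's actual tooling - here is a minimal Python/Flask sketch of that idea: serve robots.txt rules that depend on the requesting user agent, remember what each IP was told, and block an IP that later fetches a path it was told to avoid. The paths and rules are invented.

    # dynamic_robots.py - illustrative sketch only, not the commenter's tooling.
    # Serves per-user-agent robots.txt rules and blocks IPs that ignore them.
    from flask import Flask, Response, abort, request

    app = Flask(__name__)

    CRAWLER_DISALLOW = ["/search", "/private/"]   # hypothetical disallowed paths
    banned_ips = set()      # IPs that fetched a disallowed path anyway
    told_rules = {}         # ip -> disallowed prefixes it was served

    @app.route("/robots.txt")
    def robots():
        ua = request.headers.get("User-Agent", "").lower()
        ip = request.remote_addr
        # Stricter rules for anything that identifies as a bot or crawler.
        disallow = CRAWLER_DISALLOW if ("bot" in ua or "crawl" in ua) else ["/private/"]
        told_rules[ip] = disallow
        body = "User-agent: *\n" + "".join(f"Disallow: {p}\n" for p in disallow)
        return Response(body, mimetype="text/plain")

    @app.before_request
    def enforce():
        ip = request.remote_addr
        if ip in banned_ips:
            abort(403)
        # If this IP was served rules and is now ignoring them, block it.
        for prefix in told_rules.get(ip, []):
            if request.path.startswith(prefix):
                banned_ips.add(ip)
                abort(403)

    if __name__ == "__main__":
        app.run(port=8080)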
I'm using fail2ban here with good success across 30 servers.
It works approximately as well, users don't have to click a captcha, and we don't have outages. I also don't like sending all traffic over to another company - that's a huge privacy violation against my users.
How is it a huge privacy violation versus all the other networks their traffic is already passing through? Are you not using https on all your sites?
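Not a real fail2ban setup, but for readers who haven't used it, a rough Python sketch of the mechanism fail2ban automates: scan a log for repeated failures from the same IP and ban repeat offenders at the firewall. The log path, regex and threshold here are assumptions, and the ban is a dry run by default.

    # ban_sketch.py - rough sketch of what fail2ban automates; not fail2ban itself.
    import re
    import subprocess
    from collections import Counter

    LOG = "/var/log/auth.log"      # assumed sshd log location
    PATTERN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 5                  # failures before an IP gets banned

    def offenders(log_path):
        counts = Counter()
        with open(log_path, errors="ignore") as f:
            for line in f:
                m = PATTERN.search(line)
                if m:
                    counts[m.group(1)] += 1
        return [ip for ip, n in counts.items() if n >= THRESHOLD]

    def ban(ip, dry_run=True):
        # fail2ban also handles unbanning after a timeout; this sketch does not.
        cmd = ["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"]
        print("would run:" if dry_run else "running:", " ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        for ip in offenders(LOG):
            ban(ip)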
I'm using fail2ban here with good success across 30 servers.

We get it. You don't need to mention it in each post!
Are you not using https on all your sites?

If you are using Cloudflare as an endpoint/gateway protection/proxy, the encryption between the client and your site terminates at Cloudflare - otherwise it could not work. So basically Cloudflare can see everything your users are doing, including content, and obviously they have all the data they need to build profiles at scale, as they serve a huge proportion of internet traffic. This is "a huge privacy violation against my users" - plus, with Cloudflare being a US-based company, it does not get better.
How is it a huge privacy violation versus all the other networks their traffic is already passing through? Are you not using https on all your sites?

Because Cloudflare terminates the end user's SSL session on their hardware, at that point the traffic is decrypted and plain text. They then proxy the connection back to your servers, probably over HTTPS, but they effectively perform an authorised "man in the middle" (MITM) on the connection. It's not dissimilar to what lots of corporate network gateways do, relying on trusted root certificates installed on staff devices. So Cloudflare can read all the data passing through them - it's one of the reasons they can do all their clever firewall stuff.

You could of course design your systems to bypass Cloudflare for some items where you wanted full end-to-end encryption, but then you're revealing the location of at least some of your hardware - and of course the original use case for Cloudflare was handling DDoS attacks, in which case you want your actual servers to stay hidden. Later, the WAF and edge caching became major factors in potentially using Cloudflare. The MITM is enough that I don't tend to use Cloudflare, but we do have some clients using it for parts of their business and in front of the stuff we're doing for them.
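To make the termination point concrete: a toy Python sketch of what any TLS-terminating proxy or CDN edge does (this is not Cloudflare's software). It holds the certificate the browser trusts, so the request is plaintext inside the proxy, and it then opens a separate HTTPS connection to the origin. The certificate files and origin URL are placeholders.

    # tls_terminating_proxy.py - toy sketch of a TLS-terminating reverse proxy.
    # cert.pem/key.pem and ORIGIN are placeholders you would supply yourself.
    import ssl
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ORIGIN = "https://origin.example.com"   # hypothetical backend

    class Edge(BaseHTTPRequestHandler):
        def do_GET(self):
            # The client's TLS session ends here, so the request is readable:
            print("plaintext request seen by proxy:", self.path, dict(self.headers))
            # Re-encrypt on a *separate* connection to the origin.
            with urllib.request.urlopen(ORIGIN + self.path) as upstream:
                body = upstream.read()
                status = upstream.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("cert.pem", "key.pem")  # the proxy, not the origin, holds this
        srv = HTTPServer(("0.0.0.0", 8443), Edge)
        srv.socket = ctx.wrap_socket(srv.socket, server_side=True)
        srv.serve_forever()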
It'll be interesting to read the full writeup of this issue since we've only got the "file grew too big" information at present.

They also say the reason for that was an "unusual spike in traffic" of a certain kind, but that it was not a deliberate attack against Cloudflare.



I'm using fail2ban here with good success across 30 servers.
It works approximately as well, users don't have to click a captcha, and we don't have outages. I also don't like sending all traffic over to another company - that's a huge privacy violation against my users.
"huge privacy violation"..

Given this logic, using this forum is a privacy violation, and using ISPs or the internet at all would be one too. Sorry: a statement like that lacks absolutely any competence about privacy, network security and how the internet works. If this is in fact your level of knowledge, you should not offer services on the internet, let alone a forum.

Yep. I'm running my own servers for 35 years, so this is definitely true..

If you have indeed been running your own servers on the internet for 35 years - so since 1990, before the web/HTTP was invented - congratulations, you must be a veteran. But if you are such a veteran, you must know that your statement above was wrong and primitive trolling or rage bait.


The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems' permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.
The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.
After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.
We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem any outage of any of our systems is unacceptable. That there was a period of time where our network was not able to route traffic is deeply painful to every member of our team. We know we let you down today.
This post is an in-depth recount of exactly what happened and what systems and processes failed. It is also the beginning, though not the end, of what we plan to do in order to make sure an outage like this will not happen again.
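Going only on the description quoted above - a generated file that blew past a hard size limit in the software consuming it - here is a small illustrative Python sketch, not Cloudflare's code, of that failure mode and one common mitigation: validate the incoming file but fall back to the last known-good copy instead of failing outright.

    # feature_file_load.py - illustrative only; not Cloudflare's implementation.
    import json
    import os

    MAX_BYTES = 1_000_000      # assumed hard limit in the consuming software
    last_known_good = None     # most recent file that passed validation

    def load_strict(path):
        # Roughly the described failure mode: a cap that turns a bad input
        # file into a hard failure of the consumer.
        if os.path.getsize(path) > MAX_BYTES:
            raise RuntimeError("feature file too large, refusing to load")
        with open(path) as f:
            return json.load(f)

    def load_with_fallback(path):
        # One mitigation: reject the bad file but keep serving the previous one.
        global last_known_good
        try:
            features = load_strict(path)
        except (RuntimeError, ValueError, OSError) as exc:
            print("bad feature file, keeping previous version:", exc)
            return last_known_good
        last_known_good = features
        return features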