Known Bots

Known Bots 6.1.0

Don't forget that identifying and processing bots in the forum software will require increasing levels of resources - and thus direct cost to you. Outsourcing that to a third party (Cloudflare or otherwise) prevents the traffic from even reaching your server - thus requiring fewer resources to process. This is why DDoS protection should be done externally - not on-server.

PS. if you want to talk OSI model stuff - bots are generally an Application Layer thing - they use HTTP and DNS and act like regular web browsers. Network Layer stuff is much, much lower in the stack and more typically associated with firewalls and mitigating DDoS attacks rather than higher-level bot traffic.
I do have a slightly different opinion here. Clearly, for DDoS protection you need something that requests have to pass before they reach the actual forum, and it had better be on a different host. One may call it netfilter, firewall, load balancer, reverse proxy, gateway or whatever name one likes - the marketing of solution providers has scrambled those names chaotically anyway. One can use Cloudflare for that (and many people do).
Obviously, this kind of "thingy" can block or reroute any network traffic you want, classically based on source and destination. That is what a classic firewall does. With stateful inspection (developed by Check Point in the '90s) you get at least some intelligence added to that relatively primitive process, and with deep packet inspection even more - the latter unfortunately breaks privacy. And indeed it needs a considerable amount of computing resources under heavy load.

While it is highly desirable, for a bunch of reasons, to do your filtering on a different physical host than the one that runs your forum, this seems pretty demanding for small forums/websites and the people running them. If you rule out Cloudflare (e.g. for privacy reasons) like I do and can live without DDoS protection (like I do), you can still have the rest and run your firewall on the same machine - maybe in the form of Docker instances or virtual hosts, or even on the same host. So you could, for example, have a setup like:

incoming traffic -> iptables (or similar) -> Apache/nginx -> XenForo

If you are on shared hosting like me, you typically can't use/configure iptables or the webserver config and are limited to things like .htaccess or XenForo itself. While not optimal from a performance and architecture point of view, this is in many cases sufficient (but annoying to maintain; it does not scale properly, with performance issues as a consequence once many rules are in place; and with barely useful logging, bugfixing quickly becomes a nightmare).
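To illustrate, here is a minimal sketch of the kind of rule set one ends up maintaining by hand in .htaccess (Apache 2.4 syntax; the user agent and the IP range below are made-up placeholders, not actual offenders):

Code:
<IfModule mod_setenvif.c>
  # Flag a hypothetical scraper by its user agent ("SomeScraperBot" is a placeholder)
  SetEnvIfNoCase User-Agent "SomeScraperBot" bad_bots

  <RequireAll>
    Require all granted
    # Deny flagged user agents
    Require not env bad_bots
    # Deny an example network (203.0.113.0/24 is a documentation range)
    Require not ip 203.0.113.0/24
  </RequireAll>
</IfModule>

Every new bot or range means another line of this kind - which is exactly why it gets annoying to maintain.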

However: the firewall or gateway, no matter where it runs, follows rules. Some of those are easy (like only allowing certain ports or disallowing certain IP addresses), some a little more complex. And this is where something comes into play that a solution provider would probably market as "threat intelligence": the rules the firewall executes have to come from somewhere. There are some which you can define without thinking, and Cloudflare claims to do a lot of magic (and maybe it does). A lot of this magic comes from statistical data, based on the huge number of hosts that use Cloudflare. This way Cloudflare is able to discover patterns in worldwide internet traffic and react to them. This is way more than you can do yourself and, no doubt, if they do their homework properly, it can offer good protection in many cases.

However: what they probably can't do and don't do is discover patterns specific to your host. For example, the biggest threat for forums at the moment are AI bots that scrape forum content. Yesterday I had a couple of thousand requests from Vietnam, coming from a couple of hundred different IPs, slowly but steadily crawling through my forum, using a ton of different user agents, until I locked them out.

Code:
"Mozilla/5.0(iPhone;CPUiPhoneOS17_3likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)AvastSecureBrowser/5.3.1Mobile/15E148
"Mozilla/5.0(Macintosh;IntelMacOSX10_15)AppleWebKit/605.1.15(KHTML,likeGecko)Version/17.0DuckDuckGo/7Safari/605.1.15"10234684
"Mozilla/5.0(Linux;Android11;motoe20Build/RONS31.267-94-14)AppleWebKit/537.36(KHTML,likeGecko)Chrome/122.0.6261.64MobileSafari/537.36"
"Mozilla/5.0(WindowsNT10.0;Win64;x64;rv:123.0)Gecko/20100101Firefox/123.0"9894684
"Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/121.0.0.0Safari/537.36Edg/121.0.0.0Agency/98.8.8188.80"
"Mozilla/5.0(iPhone;CPUiPhoneOS17_1likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)EdgiOS/119.0.2151.65Version/17.0
"Mozilla/5.0(Linux;Android10;K)AppleWebKit/537.36(KHTML,likeGecko)Chrome/122.0.0.0MobileSafari/537.36"10564684
"Mozilla/5.0(iPhone;CPUiPhoneOS17_0likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)EdgiOS/116.0.1938.72Version/17.0
"Mozilla/5.0(Linux;Android10;K)AppleWebKit/537.36(KHTML,likeGecko)Chrome/122.0.0.0MobileSafari/537.36"10064684
"Mozilla/5.0(iPad;CPUOS17_0likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)Version/17.0Mobile/15E148Safari/604.1"
"Mozilla/5.0(iPhone;CPUiPhoneOS17_0likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)EdgiOS/116.0.1938.79Version/17.0
"Mozilla/5.0(Linux;Android10;K)AppleWebKit/537.36(KHTML,likeGecko)Chrome/122.0.0.0MobileSafari/537.36"10184684
"Mozilla/5.0(iPhone;CPUiPhoneOS17_0likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)EdgiOS/119.0.2151.105Version/17.0
"Mozilla/5.0(iPhone;CPUiPhoneOS17_1likeMacOSX)AppleWebKit/605.1.15(KHTML,likeGecko)EdgiOS/121.0.2277.107Version/17.0
"Mozilla/5.0(WindowsNT10.0;Win64;x64)AppleWebKit/537.36(KHTML,likeGecko)Chrome/121.0.0.0Safari/537.36Edg/121.0.0.0"
"Mozilla/5.0(Macintosh;IntelMacOSX10_12_5;rv:123.0esr)Gecko/20100101Firefox/123.0esr"

I was able to detect them by finding the pattern: loads of visits from Vietnam (unusual for my German-language forum) that, in parallel with different IPs, visited old threads - often with different IPs visiting the same threads. In this case it was easy, as they were all coming from VNPT Corp (Vietnam Posts and Telecommunications Group), mostly ASN45899, using static and dynamic IPs. While this pattern is very uncommon for my forum, it would probably not be uncommon for other forums or websites and would thus stay below the radar of Cloudflare - all the more as, in absolute numbers, it was not a huge "attack". Thus with Cloudflare, due to a lack of matching rules, I would probably have gone unprotected.
Manually, on the other hand, I saw a number of guests on my forum that was unusually high in relation to registered users:

Bildschirmfoto 2025-11-15 um 16.33.11.webp

After blocking ASN45899 it dropped within a couple of minutes:

Bildschirmfoto 2025-11-15 um 16.49.46.webp
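For reference, "blocking an ASN" via .htaccess simply means denying the IP ranges it announces - a sketch with documentation placeholders (the actual prefixes for ASN45899 would have to be looked up via whois/BGP tools first):

Code:
<RequireAll>
  Require all granted
  # One line per announced prefix of the unwanted ASN -
  # the ranges below are placeholders, not VNPT's real prefixes
  Require not ip 203.0.113.0/24
  Require not ip 198.51.100.0/22
</RequireAll>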

However: this needed active monitoring and manual work. I'd love to have automation for this kind of thing - no matter where the actual blocking happens. Cloudflare by itself can't do that, as they probably cannot identify the pattern I discovered.

So a little more intelligence within XenForo to identify threats and then trigger the corresponding actions - not necessarily within XenForo itself, but e.g. within Cloudflare, iptables etc. - would be helpful. Maybe a bit like fail2ban works, but based on the actual forum and its content. This could be an area where AI could actually be helpful.

Yes, this would cost resources - but it would bring a huge benefit that can't be gained otherwise, and I would be willing to pay that price.

Regarding the bot detection in XenForo: I like it, but it is clearly very far from perfect. It seems to rely simply on the user agent that is submitted, plus it does not recognize a load of bots. This is obviously not your fault, but it is simply not sufficient, and you can't do much with it anyway - and even less in a comfortable way.
What I'd like, for example, would be to be able to flag certain bots as "ok", others as "questionable" and others as "unwanted". These could, say, show up with green/yellow/red bubbles in the list and/or be filterable. If the "unwanted" ones could be forwarded to some instance (like with a "block bot" button) that would automatically create a rule for them in the gateway (whatever gateway one uses), it would become really useful.
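For an .htaccess-based gateway such a button would not need to do much - purely hypothetically, and assuming a "Require not env bad_bots" rule already exists in the file, it could simply append one generated line per flagged bot:

Code:
# Hypothetical line a "block bot" button could generate for a flagged bot;
# "UnwantedExampleBot" is a made-up name, and an existing
# "Require not env bad_bots" rule elsewhere in the file is assumed
SetEnvIfNoCase User-Agent "UnwantedExampleBot" bad_bots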

I've recently learned about "residential proxies" and still have no idea how to get rid of those - this seems to be an issue that a solution like Cloudflare could possibly detect and block more easily than others.
 
This is what we have.

Code:
<IfModule mod_setenvif.c>
  <Location />

    # Headless / automation browsers
    SetEnvIfNoCase User-Agent "(selenium|puppeteer|phantomjs|phantom|playwright(-chromium|(-webkit)|(-firefox)?)?|headlesschrome|cypress|chromiumbot|headlessbot|slimerjs|triflejs|TestCafe|Nightwatch|WebDriverIO|Taiko|RobotFramework|Protractor|Nightmare|CasperJS|ZombieJS|Splash|HtmlUnit|WebKitTestRunner)" bad_bots

    # AI / content / data crawlers
    SetEnvIfNoCase User-Agent "(Meta-ExternalAgent|Bytespider|Barkrowler|omgilibot|Dotbot|BLEXBot|Amazonbot|PetalBot|AspiegelBot|MauiBot|AwarioBot|RiverBot|ThinkBot|ThinkChaos|KingBot|nbertaupete95)" bad_bots

    # Security scanners & network probes
    SetEnvIfNoCase User-Agent "(AliyunSecBot|ZoomEye|Censys|CensysInspect|NetSystemsResearch|BinaryEdge|ProjectDiscovery|Nuclei|Nikto|Wapiti|Skipfish|SecurityResearchBot|ISSCyberRiskCrawler|SurdotlyBot|Acunetix|Netsparker|OpenVAS|QualysGuard|sqlmap|w3af|arachni|DetectifyBot|Intruder|Pentest-Tools.com|Shodan|WhatWeb|DirBuster|dirsearch|FFUF|Gobuster|GoSpider|ZAP|OWASPZAP)" bad_bots

    # Scraping / HTTP libraries
    SetEnvIfNoCase User-Agent "(aiohttp|httpx|Go-http-client|axios|okhttp|urllib|requests-html|http.rb|rest-client|HttpClient|libwww-perl|mechanize|typhoeus|faraday|Scrapy|Java/|Apache-HttpClient|LWP::Simple|LWP::UserAgent)" bad_bots

    # Low-level / legacy HTTP clients & scanners
    SetEnvIfNoCase User-Agent "(masscan|zgrab|zmap|curl-power|wget|python-urllib2|python-urllib/0\.|python-urllib3/0\.|HTTrack|WWW-Mechanize|lwp-trivial|furl|go-http-client/0\.)" bad_bots

    <RequireAll>
      Require all granted
      Require not env bad_bots
    </RequireAll>

  </Location>
</IfModule>
 
However: what they probably can't do and don't do is discover patterns specific to your host. For example, the biggest threat for forums at the moment are AI bots that scrape forum content. Yesterday I had a couple of thousand requests from Vietnam, coming from a couple of hundred different IPs, slowly but steadily crawling through my forum, using a ton of different user agents, until I locked them out.

But that's exactly what Cloudflare's system does - because they front-end a large number of sites, including forums, they are able to identify bot-like patterns of behaviour and block them for everyone at once via their bot detection systems.

Indeed - they have been quite open about this being one of the prime reasons they have a free plan - because it gives them a much larger surface area for identifying patterns that they otherwise would not have access to if they hid all their functionality behind paid plans.

To expect that to happen at a point closer to your forum - or even in software - is completely unworkable at scale, because not everyone has the time or knowledge to track or identify this information. Would you rather the XenForo devs spent their time building out some bot management system that is already provided for free by Cloudflare - or would you prefer them to focus on building the best forum software out there?

The issues you raise are not unique to forums - all websites have issues with bot scrapers - so a generic higher-level solution (eg Cloudflare's bot management) is going to be far more useful than a forum-specific one.

Regarding the bot detection in XenForo: I like it, but it is clearly very far from perfect. It seems to rely simply on the user agent that is submitted, plus it does not recognize a load of bots. This is obviously not your fault, but it is simply not sufficient, and you can't do much with it anyway - and even less in a comfortable way.

To be clear - the bot detection in XenForo is extremely simplistic. It is solely based on an assumption that bots will adequately identify themselves (and not pretend to be something they're not - eg GoogleBot), which we all know is completely untrue for a large number of bots out there which deliberately hide their identity.

Bots are being created faster than I can identify them - and I can't do anything about the other bots in any case. Even in your situation, with the bot-like behaviour coming from certain IP addresses, I can guarantee you that the behaviour would likely stop after a short time and move to a different set of IP addresses - so blocking them at that level is likely pointless as a long-term solution in any case. Like I said - whack-a-mole.

What I'd like, for example, would be to be able to flag certain bots as "ok", others as "questionable" and others as "unwanted". These could, say, show up with green/yellow/red bubbles in the list and/or be filterable. If the "unwanted" ones could be forwarded to some instance (like with a "block bot" button) that would automatically create a rule for them in the gateway (whatever gateway one uses), it would become really useful.

How would you know which are ok or questionable? What criteria could you possibly use to determine that? How is that in any way a function that should be built into forum software (or any software for that matter)?

Given a large number of forum operators have no idea about how to get email delivered reliably, how effective do you think a forum-based solution for managing bots at a lower level than the user-agent identification would even be?

You're describing quite sophisticated solutions which might work OK for you, but which are completely inappropriate for the average forum administrator who wouldn't know a firewall from a load balancer.

If only there were a free solution that did all this for you already with a point-and-click user interface that is easy to navigate 🤔
 
Which ones in particular are giving you trouble? Getting rid of Bytespider made a big difference for me. I think I added Amazonbot to the list as well, as Alexa was getting pesky.
Tons. From Amazon to Facebook to meta-externalagent and more that come in big waves, crawling everything. And new ones are always popping up - like this aiohttp Python bot now. I put them in discourage mode. Like I said earlier, I got rid of Bytespider that way. My robots.txt file is as long as my leg...

And if you're not using Cloudflare Turnstile then it adds that as well - and that makes a massive difference in spam protection (and spots bots to prevent them from registering or using the contact-us email etc).
I use a Q&A that's very effective. Bots haven't cracked it yet, so I have only had like 2 fake registrations since this summer, and they weren't bots.

Having said that and after what Sim said, I am now seriously considering switching to Cloudflare.
 
Tons. From Amazon to Facebook to meta-externalagent and more that come in big waves, crawling everything. And new ones are always popping up - like this aiohttp Python bot now. I put them in discourage mode. Like I said earlier, I got rid of Bytespider that way. My robots.txt file is as long as my leg...
As said before: Robots.txt won't help against bad bots. Putting IPs in discourage mode won't help either, as many of them switch their IPs quickly.
I went on a month-long journey trying to get rid of unwanted bots - of those discovered by Known Bots, for months only the ones I tolerate have been left. Maybe every two weeks a new one shows up and gets either tolerated or blocked via .htaccess. No issue, and barely any work or time involved any more. The much bigger issue is the bots that are not discovered by Known Bots.
Having said that and after what Sim said, I am now seriously considering switching to Cloudflare.
If you don't have access to your .htaccess file due to being on XF Cloud (and therefore have no other toolkit you can use either), you probably don't have much choice - I dislike Cloudflare for a bunch of reasons (though admittedly I have no first-hand experience with it), and from what other users say, plus what you can see on this forum here, the protection against the current wave of distributed scraper bots seems to leave a lot to be desired. Better than no protection, though.
Being basically forced into Cloudflare does make XF Cloud somewhat unattractive in my eyes.
Yesterday I had a couple of thousand requests from Vietnam, coming from a couple of hundred different IPs, slowly but steadily crawling through my forum, using a ton of different user agents, until I locked them out.

Same here. And I can't block them.
Yesterday the scrapers switched countries and came mainly from Argentina, this time using a ton of different networks. ;)
 
But that's exactly what Cloudflare's system does - because they front-end a large number of sites, including forums, they are able to identify bot-like patterns of behaviour and block them for everyone at once via their bot detection systems.
That's what they do, and that is their advantage - however: Cloudflare cannot find patterns that are specific to your forum, as they don't know your forum well enough. With the way they work there will - by design - always be a gap between what they are able to discover with their methods and what you are able to discover yourself on your own forum with careful monitoring of the traffic. Both ways aggregated would probably be best - but as far as I know, when using Cloudflare you lose some information about visitors in your forum's logfiles and are no longer able to do the digging - that is one of the reasons why I don't like Cloudflare.
Indeed - they have been quite open about this being one of the prime reasons they have a free plan - because it gives them a much larger surface area for identifying patterns that they otherwise would not have access to if they hid all their functionality behind paid plans.
As with many "free" offers, it isn't free: you pay with your data, you limit your own possibilities (and long-term also your abilities), and you become dependent on a company that is well on its way to becoming a monopoly, much in the way that Google is one in a different sector. A dangerous route, if you ask me.
To expect that to happen at a point closer to your forum - or even in software - is completely unworkable at scale, because not everyone has the time or knowledge to track or identify this information. Would you rather the XenForo devs spent their time building out some bot management system that is already provided for free by Cloudflare - or would you prefer them to focus on building the best forum software out there?
Again: these are different things; it is not the same. I expect a modern forum software to make my life easier. While some want more frills or more social media, I want more solid protection against the biggest threat to forums today. It is not either-or - at least it shouldn't be. If XF want to build a decent product, they simply cannot ignore the threats and issues forums have today. A theoretical discussion anyway, as XF fail to ship anything. Plus, obviously, it could also be done via an add-on.
The issues you raise are not unique to forums - all websites have issues with bot scrapers - so a generic higher-level solution (eg Cloudflare's bot management) is going to be far more useful than a forum-specific one.
I doubt that. Cloudflare can, thanks to its large amount of worldwide data, do things that I cannot do, while with decent knowledge about your own forum or website and its visitors you can do a lot that Cloudflare cannot do.
To be clear - the bot detection in XenForo is extremely simplistic. It is solely based on an assumption that bots will adequately identify themselves (and not pretend to be something they're not - eg GoogleBot), which we all know is completely untrue for a large number of bots out there which deliberately hide their identity.
Sure - even with Google. I do get regular visits from bots coming from Google that are not identified by Known Bots (because they don't identify themselves properly).
Bots are being created faster than I can identify them - and I can't do anything about the other bots in any case. Even in your situation with the bot-like-behaviour coming from certain IP addresses, I can guarantee you that the behaviour would likely stop after a short time and move to a different set of IP addresses - so blocking them at that level is likely pointless as a long term solution in any case. Like I said - whack-a-mole.
I do have the unfair advantage that, due to the nature of my forum, 99.5% of my "good" traffic comes from Germany, Austria and Switzerland. A very tiny bit comes from the Netherlands, France, the UK and the US; the rest of the world is practically irrelevant. Also, I consider basically any traffic that comes from a server or server range as unwanted (apart from a handful of known gateways and proxies and some bots that I consider ok). So I have no issues blocking IP ranges in China, Russia or anywhere else in the world, even if we are talking about dial-up ranges - and even less so for server ranges. It doesn't do my forum any harm.
How would you know which are ok or questionable? What criteria could you possibly use to determine that?
Exactly how I know now: I look up what the bot does and who runs it, and then I block it or not. If I did not make these decisions, the Known Bots list would be completely useless.
How is that in any way a function that should be built into forum software (or any software for that matter)?

Given a large number of forum operators have no idea about how to get email delivered reliably, how effective do you think a forum-based solution for managing bots at a lower level than the user-agent identification would even be?
For those who can handle it: very effective.
You're describing quite sophisticated solutions which might work OK for you, but which are completely inappropriate for the average forum administrator who wouldn't know a firewall from a load balancer.
I come from a time when the guideline was: if you offer a service on the internet, you should know what you are doing and are fully responsible for whatever may happen - so you had better have the skills needed. Back then you had either shared webhosting (with very limited possibilities in every respect) or a bare-metal machine that you had to configure from scratch via shell - which meant that mainly people with some skills went the bare-metal way, and those without those skills quickly learned the hard way that it was not a good idea. Times have changed; still, I consider it somewhat dangerous to run a service on the internet with absolutely no clue what you are doing.
Limiting the available toolchain to this audience is, in my eyes, the wrong way. Even in XenForo, many things can be done that are way beyond the skillset of the average person running a forum. On the other hand, there are enough people running a forum who have the skills and use sophisticated setups, including Docker and beyond. I'd consider it completely appropriate to offer tools that need some skills to make use of them - all the more if they offer a benefit beyond tools that a total noob is able to use. It is fair to offer those tools, but saying other tools would be inappropriate is, in my eyes, an invalid statement.
If only there were a free solution that did all this for you already with a point-and-click user interface that is easy to navigate 🤔
Nothing against a working free solution - but as one can see here in the forums, it seems to work rather so-so. For example, this forum here is behind Cloudflare - yet this morning you had ~30 members and ~2,500 guests visiting...

Bildschirmfoto 2025-11-17 um 10.50.51.webp

... and currently have 31 members and 4,400 guests:

Bildschirmfoto 2025-11-17 um 17.30.45.webp

A ratio that seems very high and probably indicates that there are a lot of bots around here that are not caught by Cloudflare. Depending on when you look, it may even be 5,500 guests vs. 3x members, as shown in this post:

 
Yeah, I do not think XenForo is blocking AI bots here. Though I cannot see ChatGPT on the list, which is weird, because it is just not possible not to have them crawling your site if it is allowed to, lol. I did see ByteDance, which makes me think that they really are not worried about bot traffic. Having said that, they do block a few bots through robots.txt. Amazonbot is basically hammering the site right now for some reason, and it appears to be allowed as well, except for a few links.
 
I like Cloudflare - I've had it from the start, for a number of years now. It seems reliable and easy - set and forget. And you can block an entire country in the firewall. So while my site is "global", there are a couple of countries I block where I know it's almost unheard of for members to join my site, due to the nature of it, and where a lot of bots and spam come from - so I just block that country.

Cloudflare Turnstile is also very good.
 
As with many "free" offers, it isn't free: you pay with your data, you limit your own possibilities (and long-term also your abilities), and you become dependent on a company that is well on its way to becoming a monopoly, much in the way that Google is one in a different sector. A dangerous route, if you ask me.

Kind of like how you are dependent on XenForo to build the software that runs your site? 🤔

Cloudflare is just a service - if it no longer serves my purpose then I'd simply turn it off. It's not like the forum software intrinsically depends on anything they offer. It could (eg Workers), but it doesn't.

You seem to exert a lot of effort defending your choice to not use Cloudflare. Not using Cloudflare is an entirely reasonable decision, especially for someone with the technical skills to be able to get what they need from other approaches.

However, for the vast majority of forum operators (and indeed website operators), Cloudflare is one of the single most useful tools out there. They are also one of the few companies who seem to be consistently doing the right thing by their customers. If that ever changes, then it will be time to find an alternative - but for now, they add huge amounts of value for not much cost (free version will do most of what people need, but the US$25 per month version gives better bot controls).

If you choose not to use Cloudflare - then go in peace and be happy with your choice. But you really don't need to jump into every thread where it is discussed to try and justify why you personally choose not to use it - it becomes a bit tiresome.
 
If you choose not to use Cloudflare - then go in peace and be happy with your choice. But you really don't need to jump into every thread where it is discussed to try and justify why you personally choose not to use it - it becomes a bit tiresome.
I only do that when - and because - I constantly get pointed at Cloudflare in those threads by the very same people, who for the most part ignore every reason why someone might not want to use it and the cases where it does not shine (even though many do use it and consider it helpful) - which is just as tiresome... ;) I try to avoid it, and managed to do so until now.
Cloudflare is one of the single most useful tools out there. They are also one of the few companies who seem to be consistently doing the right thing by their customers.
Maybe I'm too short on trust - I have seen it too often that great companies offer great things for free and seem to do everything right and fair - until they don't. Typically once they are close to achieving a monopoly via their positive behaviour. The most famous example is Google/Alphabet. Once they removed "don't be evil" as their basic company rule it was over - but still, years later, everyone depends on them...

Still, I might give Cloudflare a try sooner or later. My efforts seem to have been successful so far - but for one thing they are very time-consuming (which I cannot afford endlessly and continuously), plus one day my .htaccess will explode or at least slow down the forum. At some point I will have to choose either to move to a VPS or dedicated server (which will open up loads of new possibilities and solve the .htaccess problem, but require more administration and maintenance of the OS, which I don't like) or to give Cloudflare a try. We will see.
 
My experience was setting up my first forum and server plan - from absolute scratch, knowing nothing. The server plan came with free Cloudflare - it seemed a good thing. I've stuck with it. For my second forum I had to set it up manually, as the server plans no longer automatically set it up for you.
 
Maybe I'm too short on trust - I have seen it too often that great companies offer great things for free and seem to do everything right and fair - until they don't. Typically once they are close to achieving a monopoly via their positive behaviour. The most famous example is Google/Alphabet. Once they removed "don't be evil" as their basic company rule it was over - but still, years later, everyone depends on them...

To pre-judge a company because they might turn evil at some point in the future is kind of like staying single to prevent ever being hurt in a relationship. Kind of missing out on the good stuff for fear of the bad stuff.
 
Kind of missing out on the good stuff
I swear it wasn't me...

Bildschirmfoto 2025-11-18 um 20.56.38.webp

If a service that you add to improve the speed and availability of your website - and for which you even give up the privacy of your users - in fact stops your website from being reachable at all, I would not call that "good stuff" but rather suboptimal...
 