StopBotResources - Stop Spam Bots From Hogging CPU and Bandwidth [Paid] [Deleted]

tenants submitted a new resource:

StopBotResources - Stop Spam Bots From Hogging CPU and Bandwidth (version 1.0.2) - Stop Spam Bots From Hogging CPU and Bandwidth

Stops Spam Bots from taking up a large amount of CPU and Bandwidth resources

This plugin is particularly useful for small sites on shared servers with a high ratio of bots to humans, where it will have a significant impact.
For sites with far more humans than bots, or with plenty of server resources, this plugin is not necessary (it may not have a noticeable impact).

Personally, for one site that I ran, it was essential. I had only just converted it over to XenForo and within 4 days,...

Read more about this resource...
 
I've no doubt it saves bandwidth when you block out a huge amount of visitors. ;)
The more you block out, the more bandwidth you'll save.

But blocking someone from the whole site just because their IP is known to have been used by someone who has been classified (by whom?) as a spammer?

What happens if the Google Bot is classified as a spammer? You lose all your Google placements. Or is there a whitelist for "good" bots?
 
It can't... Google bot doesn't register on forums and doesn't get detected as a spammer...

The proxies that this blocks are only proxies used by spam bots (not search engine spiders or human users).

And the proxies are only blocked if they have been actively used by spam bots within the last few weeks.
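
To make that time-window rule concrete, here is a minimal PHP sketch of the idea. It is purely illustrative: the real StopProxies lookup is a remote, undisclosed service, and the local map and the three-week constant below are assumptions, not the add-on's actual code.

```php
<?php
// Hypothetical sketch -- not the actual StopProxies implementation.
// Illustrates "blocked only if actively used by spam bots in the last
// few weeks" with a local map of IP => last-seen unix timestamp.

define('BLOCK_WINDOW_SECONDS', 21 * 24 * 3600); // assumed ~3-week window

function isActiveSpamProxy($ip, array $lastSeenByIp)
{
    if (!isset($lastSeenByIp[$ip])) {
        return false; // never recorded as a spam-bot proxy
    }
    // Older entries are treated as having re-entered the general IP pool
    return (time() - $lastSeenByIp[$ip]) <= BLOCK_WINDOW_SECONDS;
}

// Example: a proxy seen spamming two days ago is still blocked
$seen = array('198.51.100.7' => time() - 2 * 24 * 3600);
var_dump(isActiveSpamProxy('198.51.100.7', $seen)); // bool(true)
```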
 
What happens if the Google Bot is classified as a spammer? You lose all your Google placements. Or is there a whitelist for "good" bots?

I'll whitelist them to be on the safe side, just in case there is ever any doubt/confusion (next version)
 
The posted graphs in fact scare me away from using it.

There is such a huge drop in visitors after installing this add-on; can it really be that 50%+ of the traffic is non-human?

Maybe you shouldn't return a 404 error to blocked traffic, but rather a link to a form where people can complain if they get trapped? Of course, that form should record and show you the visitor's IP.

And last, but not least, the source of your filter database should be mentioned. There are more reliable and less reliable sources for bot and proxy IP addresses. None of them are 100% reliable, so blocking all access to your website because of an entry in such a database won't be for everyone, for sure.
 
The posted graphs in fact scare me away from using it.
There is such a huge drop in visitors after installing this add-on; can it really be that 50%+ of the traffic is non-human?

Most forums won't see such an impact; only forums with such a high ratio of bots to humans will. It's not necessary to install this on most forums (since they will not see a significant impact). That graph is an example of it installed on a forum where it was desperately needed.

Have you seen ai-stockmarketforum? It's pretty much a bot haven; it was an old forum hardly used by humans at all (I suspect 50% spam bots, 49% search engine bots, and 1% people who somehow lost their way).

Maybe you shouldn't return a 404 error to blocked traffic, but rather a link to a form where people can complain if they get trapped? Of course, that form should record and show you the visitor's IP.

The error message, header and error type returned can all be customised by you... it doesn't have to be a 404 (so you could add a message asking blocked visitors to contact you via email, if you wanted).
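
For anyone wondering what a customised block response might look like, here is a minimal PHP sketch. The variable values are assumptions for illustration; the add-on's actual option names and defaults aren't documented in this thread.

```php
<?php
// Hypothetical example of a configurable block response: a 403 with a
// contact note instead of the default 404. It is sent before the full
// forum stack loads, which is where the CPU/bandwidth saving comes from.

$statusCode = 403; // assumed option value: could equally stay 404
$message    = 'Access blocked. If you believe this is a mistake, '
            . 'please email admin@example.com and include your IP address.';

http_response_code($statusCode);
header('Content-Type: text/plain; charset=utf-8');
echo $message;
exit; // stop here -- no further processing for blocked traffic
```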

And last, but not least, the source of your filter database should be mentioned. There are more reliable and less reliable sources for bot and proxy IP addresses. None of them are 100% reliable, so blocking all access to your website because of an entry in such a database won't be for everyone, for sure.

The database used is StopProxies, which is similar to StopBotters but holds just IP addresses. These bots have been detected by registering on XenForo forums and triggering many combinations of traps (the mechanism goes out of its way to avoid false positives); some of the mechanisms can't be disclosed, but they too avoid false positives. So far it detects about 95% of spam bots, so it can reduce the resources used by spam bots by roughly 95% (slightly less, since we still have to do a lookup and return something).
 
Hi
I wonder how exactly it checks for "proxies that have been used by spammers" ?

Read:
The database used is StopProxies, which is similar to StopBotters but holds just IP addresses. These bots have been detected by registering on XenForo forums and triggering many combinations of traps (the mechanism goes out of its way to avoid false positives); some of the mechanisms can't be disclosed, but they too avoid false positives. So far it detects about 95% of spam bots, so it can reduce the resources used by spam bots by roughly 95% (slightly less, since we still have to do a lookup and return something).
 
The posted graphs in fact scare me away from using it.
It's working very well in the graph on the front page because it's really required there (yes, the majority of users were spam bots).

I've also added it to SurreyForum... there it's not really needed at all (fewer than 100 bots a day), but you can still see the impact, since there aren't many users (and since not many bots attempt to spam SurreyForum, it only saves about 50 MB a day).


[Attached graph: SF.webp — SurreyForum traffic/bandwidth]
 
It all depends on how trustworthy the database is you use for your filtering.

And, at the moment, the website www.stopbotters.com does not show anything you can base trust on. Just words: no data, no recommendations or similar proof. Not even a statement about how the data is generated, what will be done with the data, or how it is verified and quality-checked to be sure it is 100% suitable for blocking visitors.

Another example: there is the StopBadware initiative, backed by large internet corporations, and if you look at their top 50 IPs and ASes you find many well-known names.
http://www.stopbadware.org/top-50
I don't know if I would want to block any traffic even based on their database...
 
With all due respect, there's plenty of assurance that StopBotters is trustworthy.

It's integrated into a number of add-ons here, including my Reg Form Timer add-on.

StopBotters integration is in version 2.0, which has been downloaded 70 times. If we're conservative, let's say it was actually installed on 50 of those forums.

http://xenforo.com/community/resour...mer-now-includes-stopbotters-api.1248/history

That's 50 forums, and as tenants says:
The database used is StopProxies, which is similar to StopBotters but holds just IP addresses. These bots have been detected by registering on XenForo forums and triggering many combinations of traps (the mechanism goes out of its way to avoid false positives); some of the mechanisms can't be disclosed, but they too avoid false positives. So far it detects about 95% of spam bots, so it can reduce the resources used by spam bots by roughly 95% (slightly less, since we still have to do a lookup and return something).
Think about this for a moment. Whatever the mechanisms are, if they're being used on 50 forums, that's a lot of data it's getting.

I am using StopBotters (via FoolBotHoneyPot) and it is stopping probably even more than 95% of spammers now. I'm sure tenants will confirm that there have been no reported issues of false positives either from my add-on, FBHP, or any other. The same can't be said for the currently popular services such as Stop Forum Spam etc.

With all this in mind I trust StopBotters more than any other service.
 
We are using FoolBotHoneyPot without StopBotters (we are legally not allowed to forward visitor information to unidentifiable third-party services, and there is not even an "About Us" or any other owner information at stopbotters.com) and it is blocking 100% of spam registrations. It is a very good piece of software. Thoroughly recommended.

It is one thing to use such databases for blocking forum registrations. But everybody should think twice about using the same database to block all access to the website entirely. There could be serious negative side effects that you should warn about.

Nevertheless: good luck with your StopBotters initiative. It shouldn't be this hidden when it is used in add-ons.
 
We are using FoolBotHoneyPot without StopBotters (we are legally not allowed to forward visitor information to unidentifiable third-party services, and there is not even an "About Us" or any other owner information at stopbotters.com) and it is blocking 100% of spam registrations. It is a very good piece of software. Thoroughly recommended.

Would that apply to a bot that has been identified by filling a honeypot? Surely such identified "visitor" information is excluded from such legal obligations or privacy protection ideals/rules?

The mechanism of FBHP is such that genuine human visitors never fill the pots, so surely this is the one add-on where you could pass on data to a third-party service, since all the data will be non-genuine?

Cheers,
Shaun :D
 
Would that apply to a bot that has been identified by filling a honeypot? Surely such identified "visitor" information is excluded from such legal obligations or privacy protection ideals/rules?

The mechanism of FBHP is such that genuine human visitors never fill the pots, so surely this is the one add-on where you could pass on data to a third-party service, since all the data will be non-genuine?

I don't like taking the role of "devil's advocate" here. Sorry for that. :)

IP addresses once used by spammers or even bots can be used by legitimate visitors just a few minutes later. In our mobile world, IP addresses are often shared by large pools of computers. You simply cannot say that a certain IP address is "bad" because it was used once, e.g. by a script kiddie surfing forums automatically with a new script found on the internet. The shortage of IPv4 addresses makes that even worse. All mobile service providers use proxies for their mobile internet services; if one of their users tries such a script, you may block thousands of legitimate visitors too.

There may be a mechanism in the StopBotters service that takes care of all that: excluding mobile and dynamic IP ranges, for example, excluding certain ASes (like Google's), or storing the IP addresses for only a few minutes. The best idea would be to store just an MD5 hash of a "bad boy's" full client identification headers, and keep that for just one hour.
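
As a sketch of that hash-only suggestion, something like the following PHP would do it. Everything here is hypothetical; it is emphatically not how StopBotters actually works, since its internals are undisclosed.

```php
<?php
// Hypothetical: store only an MD5 hash of the client's identification
// headers (no raw IP or user agent retained), and honour it for one hour.

function clientFingerprint()
{
    $parts = array(
        isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '',
        isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '',
        isset($_SERVER['HTTP_ACCEPT_LANGUAGE']) ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '',
    );
    return md5(implode("\n", $parts));
}

// $flagged: hypothetical map of fingerprint hash => unix time it was recorded
function isRecentlyFlagged(array $flagged)
{
    $hash = clientFingerprint();
    return isset($flagged[$hash])
        && (time() - $flagged[$hash]) < 3600; // one-hour retention only
}
```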

But since the providers of StopBotters are (a) hidden and (b) won't disclose how their service works, you simply have to put a lot of trust into using that add-on. And even then it will block many legitimate visitors too, for sure. :)
 
And even then it will block many legitimate visitors too, for sure.
It's certainly not "for sure", but I do appreciate your concerns... don't use it if you're worried. Have you found that FBHP has accidentally detected a human as a bot? StopBotters goes a little further in making sure this never happens (multiple bot triggers have to occur before StopBotters/StopProxies ever records a spam bot). I haven't had a case where StopBotters detected a bot that the mechanism of FBHP somehow didn't, and I expect I never will (unless FBHP is bypassed).

You are right about IP addresses re-entering the pool, but as mentioned, the only IP addresses that are blocked are ones that have been detected as active spam bots within the last few weeks.

The concern about script kiddies getting an IP address banned isn't very valid. The techniques used to detect bots are highly optimised for detecting XRumer (this is what hits and spams most forums the vast majority of the time).

  • XRumer is quite a hefty piece of software; people usually run it from dedicated servers
  • It's quite costly (~$600 last time I checked)
  • Most XRumer users use a dedicated box as a proxy server, or buy a list of proxies (sometimes they use free proxies, but find those are already banned on most forums). These proxies are usually paid for by the month (some sites offer daily proxies, but the proxy owner keeps them for much longer). Spammers won't be using their own IP addresses, so these addresses are out of the pool of your normal users for a while. Eventually these proxies do go back into the pool, but three weeks is a reasonable window for forum users on variable IP addresses not to be accidentally detected as bots (since even if a proxy is detected while a spammer is using it, the spammer or proxy provider will usually keep ownership of that proxy for at least a month).

If the spammer is running a basic script via a browser, FBHP goes out of its way not to detect it as a bot (real people do use browser plugins to auto-complete forms, or browser plugins for password managers). Users on browsers are very unlikely to ever be detected as bots, regardless of what browser plugin they are using. A combination of events has to be triggered before a visitor is flagged as a spam bot, and that only happens with an outside application (such as XRumer). People running scripts via browsers won't be detected as spam bots... and to be honest, I don't really care about them (they are not firing 1,000 requests every few seconds at hundreds of thousands of forums overnight); in fact, compared to XRumer users, they are very rare.
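
That "combination of events" point can be sketched in a few lines of PHP. The trap names and the threshold below are invented for illustration; the real mechanisms are deliberately undisclosed.

```php
<?php
// Hypothetical multi-trigger check: one tripped trap (e.g. a browser
// form auto-filler) is never enough -- several independent traps must
// fire before a visitor is treated as a spam bot.

define('TRIGGERS_REQUIRED', 3); // assumed threshold

function isSpamBot(array $trapResults)
{
    // $trapResults: hypothetical map of trap name => whether it fired,
    // e.g. array('hidden_field_filled' => true, 'timing_too_fast' => false)
    $triggered = count(array_filter($trapResults));
    return $triggered >= TRIGGERS_REQUIRED;
}
```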

StopBotters/StopProxies also avoids the errors introduced by humans falsely reporting IP addresses (maliciously or accidentally) as bots. I've seen this over and over with most anti-spam APIs, and the reason is the introduction of human reporting, or incorrect use of the reporting mechanism. That cannot happen with StopBotters/StopProxies: nobody has access to use it or to manually report forum users to it.

It's also a closed network; nobody can query or use the database apart from XenForo forums. The reason for this is that the effectiveness of many APIs has been seen to dwindle, and will continue to slip, as more XRumer users use the open networks to know when to change their settings, or use things such as Xblack.txt; this allows the spam bots to go unnoticed for much longer.

By withholding that information from the bot operators, making the methods unobtainable and not making the data public, we put XenForo users who use StopBotters in a much stronger position.

But... if you cannot send user data to a third party, then obviously no API can be used in your case (all APIs need some information to be able to detect whether a user is a bot or not). The lack of information on the StopBotters/StopProxies site will possibly stay that way for a while, so if that is something you can't live with for your forum users, then don't use it.

But... as a final note, this mod (StopBotResources) is not for all forums. It's meant for small forums (usually on shared hosts) that are finding spam bots take up a considerable amount of resources... I was in that situation myself. It is a lifesaver for forums that just can't continue due to the amount of resources spam takes up (I know of sites that have closed down purely because of spam attempts hitting them, even when the spam wasn't successful). It doesn't sound like you are in that situation, so it's unlikely this plugin will benefit you considerably.

I don't like taking the role of "devil's advocate" here
Don't worry, I can see where you are coming from, but I know the other side... and everything is being done to avoid false positives; it's designed with this in mind. It's currently very easy to detect bots, but harder to make sure you never accidentally detect a human as a bot (this is something I keep repeating in PMs... making sure false positives never occur is the priority).
 
Thank you very much for the explanation of what you're doing.
Now people know and can choose. ;)
 
tenants updated StopBotResources - Stop Spam Bots From Hogging CPU and Bandwidth with a new update entry:

StopBotResources - v1.0.3 minor upgrade

  • Option added to allow search engine spiders to bypass the StopProxies spam-bot check. This option is not strictly necessary, since search engine spiders will never be detected as spam bots; it has been added to remove any fear that search engine bots might be unable to spider your site. The core XenForo functionality is used to check whether the user is a search engine robot (this checks the user agent, which can be faked by bots). A rough sketch of the idea follows below.
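
The sketch below assumes a simple user-agent test in PHP; XenForo's core robot detection is more thorough than this hypothetical regex, and as noted, the user agent can be faked.

```php
<?php
// Hypothetical spider bypass: skip the StopProxies lookup when the
// user agent matches a known search engine spider.

function looksLikeSearchSpider($userAgent)
{
    return (bool) preg_match(
        '/googlebot|bingbot|slurp|duckduckbot|baiduspider/i',
        $userAgent
    );
}

$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
if (looksLikeSearchSpider($ua)) {
    return; // apparent spider: bypass the spam-proxy check entirely
}
// ...otherwise fall through to the StopProxies lookup
```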

Read the rest of this update entry...
 
If you have been part of the testing for this resource and want the updates, let me know your username on SurreyForum and I'll upgrade your account (so that you can get free updates for this resource)...
Doing it this way makes it much easier for me than sending out new versions by email ;)
 
Having tested this on all 8 sites on my server, I didn't really notice any sizable difference (possibly due to a low ratio of bots to natural traffic):

[Attached graphs: server CPU, server ethernet, and server throughput]

I'm lucky to have a dedicated server that can cope with a lot of traffic, and uncapped bandwidth from my host (within the practical limits of my server's gigabit NIC), so this isn't something I'm really going to benefit from. However, I'd expect this add-on to still be very useful to those with a high volume of bot visits, or whose server/bandwidth resources are limited (or where you are charged for bandwidth overages).

Cheers,
Shaun :D
 
I too installed this plugin on my site, but have yet to see any difference in bandwidth at all. At the moment I'm using XenUtiles spam management, which catches all the bots.
 