[TAC] Stop Human Spam

[TAC] Stop Human Spam [Paid] 1.4.8

Hey @tenants - any way to turn like post ratio off?

I tried putting it to 0 and it is failing even though it should work:
http://imgur.com/myATB9J

I think it might also just be a bug.

Just tried it; it works with a 0 like ratio. Can you take a screenshot of the options for this plugin? There are two areas, one for signatures and one for posts. Even I got confused when I looked at it. I'm suspecting it might be a setting issue... but I need more info to dig deeper.
 
Great news @tenants and I'd love to see this developed for XF2 as well - it's worked fantastically on all of the sites I've run over the years and still catches bunches of human spammers every day at CycleChat. (y)
 
A nice addition to this would be to include the user's IP address in the logs, so guests who attempt to spam and get caught, yet keep attempting to spam as a guest, can be blocked by IP to remove their nuisance entirely.
 
The username in the logs is a link; it directs you to the user's profile in the admin area, where you can obtain the IP, do lookups, etc.

upload_2017-1-25_10-2-27.webp

You could then block by IP from there, or maybe you want me to include a "block by IP" link just to make this a little easier to do from StopHumanSpam.

Alternatively, this could be automatic, like an auto IP cache similar to what is done for DeDos and FoolBotHoneyPot

If a user keeps on attempting, they are automatically blocked for a certain period of time by IP address
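A minimal sketch of what such a temporary auto-block could look like, purely as an illustration (the names and thresholds are hypothetical, not the add-on's actual code): an in-memory cache mapping an IP to an expiry time, similar in spirit to the 48-hour caching mentioned for DeDos/FBHP.

```python
import time

# Hypothetical sketch: auto-block an IP for a limited period after
# repeated caught spam attempts. Not the add-on's implementation.
BLOCK_SECONDS = 48 * 3600   # 48 hours, like the DeDos/FBHP cache
MAX_ATTEMPTS = 3

attempts = {}   # ip -> caught attempt count
blocked = {}    # ip -> unix time when the block expires

def record_attempt(ip, now=None):
    """Count a caught spam attempt; block the IP once it hits the limit."""
    if now is None:
        now = time.time()
    attempts[ip] = attempts.get(ip, 0) + 1
    if attempts[ip] >= MAX_ATTEMPTS:
        blocked[ip] = now + BLOCK_SECONDS

def is_blocked(ip, now=None):
    """True while the IP's temporary block has not yet expired."""
    if now is None:
        now = time.time()
    expiry = blocked.get(ip)
    if expiry is None:
        return False
    if now >= expiry:          # block lapsed: forget the IP entirely
        del blocked[ip]
        attempts.pop(ip, None)
        return False
    return True
```

The point of the expiry is exactly the long-term concern raised below: an IP doesn't identify a person forever, so blocks should lapse rather than accumulate.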
Hmmm, I'm not sure about automating this. DeDos and FoolBotHoneyPot were created for bots; StopHumanSpam is for humans (DeDos should also catch the humans using semi-automated tools for sending messages, so those sorts of nuisances should already be removed).

I haven't really found human spammers to cause this kind of issue, but nuisances should be easier to remove. I'm just not sure it's a good idea to do this automatically.

I think you were just referring to a link, which would be easy enough, and avoids any unnecessary automation against humans
 

Attachments: upload_2017-1-25_9-59-46.webp
For guests, this doesn't work. There is no user account to log IPs against... hence the guest IP spam issue that keeps recurring. I agree this is human spam, not automated... but automated bots trigger this too, and thus finding their IP would be good.

With guests, you have no control beyond looking at a log and doing nothing about them. See the issue?

I don't use all the other tools, as some caused massive issues with current XF releases, so I removed them all except this, which still seems to work well.
 
I see. I never see bots, due to the IP caching of DeDos and FBHP, so I never come across this.
I've updated the other spam add-ons so they work with the latest XenForo here.

But I see your point about guests

A "block this IP address" link displaying the IP address would be a good idea and should be easy enough. I believe the IP address is already logged; I just need to add a link to the core functionality:
http://www.example.com/admin.php?banning/ips-add&ip=xxx.yyy.zzz.aaa
I'll look into it
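Assuming the admin route quoted above, building such a link is just URL-encoding the logged IP into the query string. A small Python sketch of the idea (the function name and board URL are illustrative, not the add-on's code):

```python
from urllib.parse import quote

def ban_link(board_url, ip):
    """Build the core 'add IP ban' admin link for a logged IP.

    Uses the banning/ips-add route mentioned above; board_url and the
    function itself are hypothetical illustrations.
    """
    return f"{board_url}/admin.php?banning/ips-add&ip={quote(ip)}"
```

For example, `ban_link("http://www.example.com", "203.0.113.7")` yields the same shape of URL as the one shown above.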
 
Okay, I wasn't logging the IP, so there are a couple of DB changes and a bit of testing for me to do. I should be able to update this with "block this IP address" functionality.
I'll add it to the logs, shown when you click a particular log ID.
 
@Anthony Parsons do you often get IPv6 bots? It might be a bit difficult for me to test fully, since I'll never see real bots any more. I'm armed to the teeth with anti-spam; spam bots don't get a chance to do anything that uses resources on my sites.

I've tested it in my test environment but have no IPv6 addresses available to me. Fancy giving it a test for me and letting me know if IPv6 addresses work too (they should)?
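For what it's worth, handling both address families usually comes down to parsing and normalizing the address before it's logged or banned. A sketch using Python's standard `ipaddress` module (an illustration of the check, not the add-on's code):

```python
import ipaddress

def normalize_ip(raw):
    """Return the canonical form of an IPv4 or IPv6 address,
    or None if the string isn't a valid IP at all.

    ip_address() accepts both families, so the same code path
    covers IPv4 and IPv6 logging without special cases.
    """
    try:
        return str(ipaddress.ip_address(raw.strip()))
    except ValueError:
        return None
```

Normalization matters for IPv6 in particular, because the same address can be written many ways (`2001:0db8::0001` vs `2001:db8::1`), and a ban list should store one canonical form.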

I don't particularly like the core's idea of banning IPs. I only cache them for 48 hours with DeDos and FBHP, since an IP doesn't truly identify a user long term and you could end up with an exhaustive list, but it is a simple solution to your issue.

v1.3.4 is ready whenever you're ready to test it (I can send it via email; PM me your email address).

I've now added a simple link that navigates to the core banning:

upload_2017-1-25_20-12-51.webp
 
I'm too armed to the teeth with anti-spam, spam bots don't get a chance to do anything that uses resources on my sites
It isn't about spambots... it's about human spammers. They hit guest posting. Cheap off-shore spammers... and whilst this still catches 99% of human spammers, it still misses the occasional one.

fancy giving it a test for me and letting me know if the ipv6 ip's work too (should do)?
Yep... absolutely. I have IPv4 and IPv6 content that gets caught in this, as I have new members using IPv6 who often try to post links to something or other and get caught.

I don't particular like the cores idea of banning IPs, I only cache them for 48 hrs with dedos and fbhp, since it doesn't truly identify a user long term, and you could end up with an exhaustive list, but it is a simple solution to your issue
I don't really want to ban them; I just want the IP. Right now, guest posts won't show the poster's IP... I want the IP so I can manually check whether the host is a known spammer and, usually, block the entire range.

I don't typically block single IPs, as it's a complete and utter waste of time. But if the IP isn't there for guest post spam, I can't do much of anything other than keep watching it happen.

See my point?

I would honestly prefer the link open the normal IP lookup rather than ban, but that's your call.
 
The IP is displayed as above; the link is there if you want to ban the IP address, but it's obviously optional. Send me your email in a PM and I'll send you a version you can test.
 
@Anthony Parsons

Just from the conversations via email, have you tried DeDos at all?

It comes with an option for stopping suspicious hosts. I think I've prepopulated it with quite a few; these are some of them, picked up from the other methods DeDos uses:

colocrossing, .contina.com, quadranet.com, .tor-, -tor., .tor., tor-exit, torexit, torproxy, tor.proxy, tor-proxy,tor.exit, torserver, tor.het, .geoca.st, colocall.net, tuthost.ua, azure., privacyfoundation, ip-pool.com, vpnsvc.com, nullbyte.me, heroku.com, .sevpn.com, .alexhost.md, .SteepHost.Net, host1dns.com, serverdale.net, globaltap.com, heilink.com, vultr.com, dataclub.biz, s51430.net, cloudatcost.com, masharikihost.com, scalabledns.com, novalayer.net, unmetered.com, voxility.net, netbynet.ru, corbina.ru, cpx.ru, ertelecom.ru, elcom.ru, comcor-tv.ru, .mts.ru, a4321.ru, .sat-dv.ru, qwerty.ru, maxnet.ua, .com.ua, .net.ua, nephax.eu,poneytelecom.eu, triolan.net, .eonix.net, trendmicro.com, .mysipl.com, enjoy.ne.jp, .digicube.fr, .contina.com, .ztomy.com, .krypt.com, embarqhsd.net, chinamobile.com, fastwebserver.de, .cantv.net, gemwallet.biz, .net.il, .totbb.net, ziggo.nl, .163data.com.cn, .enn.lu, kyivstar.net, turkrdns.com, .chello.pl, tpnet.pl, 1113460302.com, .2015.com, .com.cn, .vdc.vn, .hinet.net, .ukrtel.net, .ccc.de, .hispeed.ch, servers.com, dsbfthp.com, piraten-nds.de, snowylinker.com, pinspb.ru, .micfo.com, noisetor.net, .com.pl, .nos-oignons.net, .icom.lv, webenlet.hu, .fsa-bg.org, .kundencontroller.de, kyivstar.net, telecet.ru, ufanet.ru, guilhem.org


Most of them are hosting sites that I use myself for automating (but white hat automation of my own sites). If a user is navigating from a server, for instance Amazon Web Services, chances are they are using automated methods; they're definitely not a typical non-bot user.
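Matching against a list like the one above is conceptually just a case-insensitive substring check on the visitor's reverse-DNS hostname. A hedged sketch (a few patterns are taken from the list above plus the AWS example; the function itself is hypothetical, not DeDos's code):

```python
# A handful of fragments from the suspicious-hosts list above,
# plus "amazonaws" per the AWS example; purely illustrative.
SUSPICIOUS_PATTERNS = [
    "colocrossing", "tor-exit", "vultr.com",
    "poneytelecom.eu", "amazonaws",
]

def is_suspicious_host(hostname):
    """True if the reverse-DNS hostname contains any blocklisted
    fragment (case-insensitive substring match)."""
    host = hostname.lower()
    return any(p.lower() in host for p in SUSPICIOUS_PATTERNS)
```

Substring matching (rather than exact matching) is what lets a single entry like `tor-exit` catch whole families of exit-node hostnames.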

upload_2017-1-26_23-52-29.webp



And they are quite easy to see in the logs. For instance, I block hosts that contain certain words (such as tor), and I sniff out other hosts using the magic ingredient 1.
There is an option to update the .htaccess file, so certain IPs are blocked for 48 hours (the ones that are scraping / hitting the site over and over).
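A rough sketch of how such a time-limited .htaccess block could be maintained (the file format and function are assumptions for illustration, not DeDos's actual implementation): keep the time each block started and rewrite only the unexpired deny rules.

```python
import time

BLOCK_SECONDS = 48 * 3600  # the 48-hour window mentioned above

def render_htaccess(blocked, now=None):
    """blocked maps ip -> unix time the block started.

    Emit Apache 2.2-style deny rules only for blocks younger than
    48 hours, so expired entries silently drop out on each rewrite.
    """
    if now is None:
        now = time.time()
    lines = ["Order Allow,Deny", "Allow from all"]
    for ip, started in sorted(blocked.items()):
        if now - started < BLOCK_SECONDS:
            lines.append(f"Deny from {ip}")
    return "\n".join(lines)
```

Regenerating the whole rule block each time, instead of appending, is what keeps the list from growing without bound.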

logs example:

upload_2017-1-26_23-57-3.webp
 
You can use DeDos just to block the hosts, but it's really wasted if you don't turn on secret ingredient 1, which picks up so much and is highly effective with no false positives; it simply detects what is a browser and what is not.

You don't have to automatically update the .htaccess; it's an option. But it makes it easy to do what you mentioned, block certain hosts, and the logs make it incredibly easy to spot the biggest offenders.
 
By the way

It isn't about spambots.... its about human spammers. They hit guest posting. Cheap off-shore spammers... and whilst this still gets 99% of human spammers, it still misses the occasional one.

Using colo hosting (navigating from the server itself) and such is not typical of a human, but is typical of an automation environment, where you only turn the server on to run your scripts and pay for the short time you use it. What hosts are you talking about? Were they any of the ones mentioned above? These were all picked up by DeDos, and all used by botters.

Getting past the captcha does not define a human, especially the commonly used captchas (Google's, etc.). I'm interested to know which hosts you're referring to. Once you use DeDos, you won't believe the number of bots around: a 2017 newspaper article estimated that over 60% of internet traffic now comes from bots, and for forms it's going to be even more targeted! Humans make up only 38.5% of traffic. Bot traffic is up 21% compared to 2012, and the trend points to an incredibly botty future!

So I always ask: are you sure it wasn't a bot? Because more often than not, it is... unless it's one of my sites; then bots are not very likely (unless they are my own).

http://www.dailymail.co.uk/news/art...ibute-malware-steal-data-swipe-passwords.html
 
Not a bot, as bots can't get past Google's new reCAPTCHA. I know this because I used automated tools to try to get by it, and for the new version the best methods usually send the reCAPTCHA to a human for verification. Actually solving reCAPTCHA automatically now has a very low success rate.

So yes, it's human spam.

It is coming from the Netherlands. I have used DeDos before, but I use Cloudflare now and block everything across all my sites, all servers, in one go, with their firewall. Makes it easier for longevity.

It was getting the IP of the guests, though, that was the problem.
 
There are many types of bots. Many browser-based bots (I use them myself in combination with Selenium, ANT and the Java Robot framework) will bypass Google's No CAPTCHA reCAPTCHA. Browser-based bots are not uncommon and can be used for posting on forums. I can probably give you a demonstration of one in action if you're willing to log into one of my AWS servers (I use them mainly for tweeting and doing a bit of white hat automation of my own forums).

Edit: no need, here are some videos of browser-based bots bypassing No CAPTCHA reCAPTCHA.
Like all anti-spam mechanisms, botters will keep breaking it, and Google will keep fixing it. It's an arms race.


Xrumer has been fairly unsuccessful against No CAPTCHA reCAPTCHA so far, but they boast they're looking at it.
See: http://www.botmasterlabs.net/event/2016-08-10/1/


However, their recent update targets honeypots, as I had predicted it would (I'm pretty sure this update was largely due to the XF core adopting honeypots, which I warned about many times).
http://www.botmasterlabs.net/event/2017-01-03/1/

Xrumer is probably the number 1 contributor to forum spam, so reCAPTCHA works against it for now.
But all captchas work, and then they break; this has happened time and time again. The most widely adopted are the most widely targeted!

If you rely on a core method, or a widely used method that many other people use, you are susceptible to floods of spam.
These are exactly the types of methods that get targeted and broken, leading to floods of spam getting in.

Customise, customise, customise: use many anti-spam methods, and try to use ones that do not bother your users.

Anti-spam APIs are pretty good; these don't usually get bypassed by floods of spam, but they aren't 100%, they let in a small fraction. XenForo has more than enough anti-spam APIs.
There is a way around them too, and something I see as the future of anti-spam. However, I don't think the developers of Xrumer are that far-sighted, and thankfully they seem quite slow at updates.

I suspect reCAPTCHA will be within Xrumer within the next 6 months to a year.

Reiterating: "bypassing captchas does not define a human". I can demonstrate a bot in action that does this even for No CAPTCHA reCAPTCHA.
 
The limited spam I see in the log is not bots, though... it is human. I read the logs and understand most of the different browser hacks used to disguise oneself. I get very limited logs... with 20k a day hitting my sites, I'm lucky to see 5 to 10 logged. Not bots, purely human. I have enough tools in play to stop all the automated nonsense, and I don't get automated spam. Haven't in years... partly thanks to some of your add-ons; then I migrated to more optimal solutions at DNS. Shifting to DNS stopped it all before it got to my server.

Honestly, this single add-on of yours is all I use now, and it is invaluable. It seriously catches 99.99% of everything from those hard-core dedicated spammers who give eyes-on spamming a go... and they all quickly give up. It also catches those who go through the register process, attempt to post 5, 10 or 20 short posts, then get tired and try their spam... caught with this.

I have a high threshold set before any member can post links... and even then, I use your ratio component to ensure what they post is quality and is actually being liked, not just racking up post count. That ratio is a slam dunk nicety. :)
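The threshold-plus-ratio gate described above boils down to a simple check before link posting is allowed. A hypothetical sketch of the logic (names and default thresholds are illustrative, not the add-on's internals; note a ratio of 0 disables that particular check, per the earlier discussion):

```python
def may_post_links(post_count, like_count,
                   min_posts=20, min_likes=3, min_like_ratio=0.1):
    """Allow link posting only once a member has enough posts, enough
    likes, and a high enough likes-per-post ratio.

    All thresholds here are made-up illustrations. Setting
    min_like_ratio to 0 skips the ratio check entirely.
    """
    if post_count < min_posts or like_count < min_likes:
        return False
    if min_like_ratio and like_count / post_count < min_like_ratio:
        return False
    return True
```

The ratio term is what distinguishes a member whose posts the community actually likes from one who merely grinds out short posts to clear a raw post-count threshold.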
 
I love this add-on! Now I can finally disable spam rules in conversations (because they can't be moderated and I have http, https and www in my spam words filter)
It was really annoying for people but it was necessary for posts.

But now... Another problem.
Is there a way to stop people from posting URLs or hidden URLs in custom profile fields?
Is it possible to extend it to the custom profile fields?
 
There seems to be a wrong phrase in the Link-Post-Rules Message, causing the Likes output to show the min. likes:

You must have received at least {min_likes} Likes (Yours: {min_likes})<br/>

I have min. likes set to 3 and have no likes, and the output is 3.

Not sure what the right one would be...
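The bug above amounts to the same token being substituted twice: the phrase needs two distinct placeholders, one for the configured minimum and one for the member's actual count. A conceptual sketch of the substitution (the `{likes}` token name is my invention for illustration, not necessarily the add-on's real token):

```python
def render_rule_message(template, min_likes, likes):
    """Fill in both the configured minimum and the member's actual count.

    {likes} is a hypothetical token name; the buggy phrase used
    {min_likes} for both slots, so both showed the minimum.
    """
    return (template
            .replace("{min_likes}", str(min_likes))
            .replace("{likes}", str(likes)))
```

With min. likes set to 3 and no likes, a corrected template such as `"You must have received at least {min_likes} Likes (Yours: {likes})"` would then render "at least 3 Likes (Yours: 0)" instead of showing 3 twice.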
 