
Duplicate Stop Bot Spam Via Altering Registration Fields

Discussion in 'Closed Suggestions' started by BamaStangGuy, Apr 5, 2013.

  1. BamaStangGuy

    BamaStangGuy Well-Known Member

    This has basically stopped all bot spam on my forums, and I would like to see it in default XenForo. It is two-part, but both parts deal with altering the registration fields.

    The Form Customisation Mechanism
    • As mentioned above, XRumer and many other bots try to inject information into forms by using field names that they know (name=email, name=password)
    • With the customisation mechanism, each of the visible field names (the fields that a user can see) is uniquely named, and new names are generated for each session.
    • Since the bot will not know which field names are which (for instance, which is the email and which is the password_confirm), it becomes incredibly difficult for the bot to populate the form correctly, once again preventing it from registering
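    As a rough sketch of how per-session field naming can work (illustrative Python, not the actual FoolBotHoneyPot or XenForo code; all names here are made up):

```python
import secrets

# The real registration fields a user sees.
REAL_FIELDS = ["username", "email", "password", "password_confirm"]

def make_field_map():
    """Generate a fresh {random_token: real_name} mapping for one session."""
    return {"f_" + secrets.token_hex(8): name for name in REAL_FIELDS}

def decode_submission(field_map, posted):
    """Translate a submitted form back to the real field names.

    Keys the bot guessed (name="email", name="password", ...) are not
    in the map, so its injected values are simply dropped.
    """
    return {field_map[k]: v for k, v in posted.items() if k in field_map}

# On GET: store field_map in the session and render inputs with token names.
field_map = make_field_map()

# On POST: a human's browser echoes the token names back; a bot that
# hard-codes name="email" submits a key the server does not recognise.
token_for_email = next(t for t, n in field_map.items() if n == "email")
posted = {token_for_email: "user@example.com", "email": "bot@spam.example"}
decoded = decode_submission(field_map, posted)
```

    In a real add-on the mapping would live in the server-side session, and a submission containing none of the expected tokens could itself be treated as a bot signal.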
    The Form Field Randomisation Mechanism
    • For those bots that do not use field names but simply populate the form in form-index order, this is an additional mechanism to trip them up
    • Randomising the field order makes it incredibly hard to populate a form by index number.
    • The fields are randomised every time the registration page is loaded/refreshed
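    The field-order idea can be sketched the same way (again illustrative Python, not the add-on's actual code):

```python
import random

# The visible registration fields, in canonical order.
fields = ["username", "email", "password", "password_confirm"]

def render_order(seed=None):
    """Return a freshly shuffled field order for one page load.

    A bot that fills the form by index (first input = username,
    second input = email, ...) will mis-assign its values.
    """
    order = fields[:]
    random.Random(seed).shuffle(order)
    return order

# Every load/refresh of the registration page gets a new order.
load1 = render_order(seed=1)
load2 = render_order(seed=2)
```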
  2. Gohan

    Gohan New Member

    This would actually be awesome. How could I do this on my own forum?
  3. tenants

    tenants Well-Known Member

    It's already available in FoolBotHoneyPot
    Paid: http://xenforo.com/community/resources/foolbothoneypot-bot-killer-spam-combat.1085/
    Free: http://xenforo.com/community/resources/tac-tenants-anti-spam-collection-anti-spam-free-version.1474/

    The problem with using weak mechanisms (mechanisms that can be bypassed by designing a bot around them) in the core is that those mechanisms then carry a much larger reward for being bypassed

    Weak mechanisms can be 100% effective up until the point they are broken
    Strong mechanisms (mechanisms that cannot easily be bypassed) rarely detect 100% of bots.
    So using both kinds of mechanism together has advantages
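    The weak-plus-strong layering can be sketched like this (illustrative Python; both checks are stand-ins, and the "API" is stubbed with a local set rather than a real StopForumSpam lookup):

```python
def weak_check(form):
    """A cheap custom check, e.g. a honeypot field that humans leave empty."""
    return "honeypot" not in form or form["honeypot"] == ""

def strong_check(ip, email):
    """A stand-in for an API lookup (e.g. StopForumSpam); stubbed locally."""
    known_spammer_ips = {"203.0.113.7"}  # illustrative data only
    return ip not in known_spammer_ips

def allow_registration(form, ip, email):
    # Each layer can reject independently; a bot must beat every layer.
    return weak_check(form) and strong_check(ip, email)
```

    The weak layer catches naive bots cheaply; the strong layer catches the ones that learn to bypass it.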

    As of yet, there isn't a mechanism that will stand the test of time as a strong mechanism as AI/spam progresses.
    Some APIs rely on weak mechanisms, so they are not strictly strong mechanisms, but an API is a good example of something close to a strong mechanism

    Other weak mechanisms that have followed the above history (going into the core, then becoming ineffective):

    Originally, Q/A was a plug-in for forum software that won't be mentioned
    It was found to be almost 100% bullet-proof... a mechanism that stopped 100% of bots
    After this mechanism went into the core of a large system (such as XF), it became a target where the reward of breaking the system is great
    - The system was broken with a local text file (textcaptcha.txt) plus a central database of stored answers
    When bot users find they are not getting past a set of Q/A questions, they manually update their textcaptcha file with the answers; these are then shared via the central database so that everyone beats the Q/A

    ReCaptcha was added to the core of many systems
    The reward for breaking this system became large...
    It was by no means an easy task, since Google update their sets regularly (I believe they will beat bots again soon, and once again be beaten soon after that)
    - The system was broken by training ANNs/OCR against the readily available data sets
    It was only possible to beat such a system because the training sets are readily available. Even with a multi-billion-dollar corporation such as Google behind it, if you design your CAPTCHA to be easy to train against, you have made an ineffective mechanism that needs to be constantly updated. If you use custom images, it's much harder for bot designers to train against your set (since they have no data to train on)

    We now have the Registration Timer in the core of a large system
    The reward for breaking this system is becoming large...
    - The system will be broken with script pausing: 1,000 parallel threads will simply start 10 seconds later (slowing the entire overnight process down by a total of no more than 10 seconds). The timer mechanism will then be globally ineffective
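    The parallelism argument can be demonstrated directly: when many bot threads all pause past the timer at once, the whole batch is delayed by only one timer interval, not one interval per registration (illustrative Python with scaled-down numbers):

```python
import threading
import time

TIMER_SECONDS = 0.2  # stand-in for a 10-second registration timer
N_BOTS = 50          # stand-in for 1,000 parallel registrations

def register():
    time.sleep(TIMER_SECONDS)  # the bot script simply pauses past the timer

start = time.monotonic()
threads = [threading.Thread(target=register) for _ in range(N_BOTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# All the waits overlap, so elapsed is roughly TIMER_SECONDS,
# not N_BOTS * TIMER_SECONDS.
```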

    StopForumSpam is being introduced into the core. APIs are not essentially weak mechanisms, so this is not a bad move... however:
    The reward for breaking this system is becoming large...
    - Several approaches have been used against it. One is to use black.txt to avoid reporting sites, and thus remain unnoticed for longer (this wouldn't be an option for spammers once it is in the core, since they would have to avoid every site)
    - Secondly, the latest versions of XRumer send you an email once the IP/email of your bot is detected, allowing you to change credentials
    - Thirdly, there are some very serious spammers out there with more than 20,000 proxies (I have been following one personally). These bots are incredibly hard to stop with any available API; they change their credentials very frequently because they can afford to
    - Fourthly... there is another way to design XRumer around many APIs. I will not say it publicly, since I would rather it take bot designers longer to figure out
    - One of the problems here is that the data is open for everyone to use. This is true of almost every API

    By placing a mechanism in the core, you put bot designers in a position where the reward of breaking the system is easy to spot.
    At $650 for every application they sell, they are quite motivated.

    But... bot designers are having trouble with the APIs, since these are not essentially weak mechanisms (mechanisms that can easily be bypassed), though some APIs do rely on weak mechanisms

    Bots have always had trouble with custom mechanisms, since the reward of breaking those systems is low (by breaking one, they only get past a handful of sites). By using various custom mechanisms from various designers, your approach to stopping spam becomes a harder target

    By relying on core mechanisms of a large system, the mechanisms will be broken in waves... we will see the registration timer broken (in the next version of XRumer, or the one after that; I suspect it will be the boast of XRumer 8). The more mechanisms we introduce into the core, the more mechanisms will be targeted.

    The only ideal approach is to have many anti-spam designers keep their approaches custom

    Putting weak mechanisms into the core will make those mechanisms ineffective.
    Putting strong (or close to strong) anti-spam mechanisms into the core (such as APIs) may not always be 100% effective, but they are much harder to target.
