How do you handle members who use AI in their posts?

This looks like something an AI bot would post up.
Congratulations, you just became a live demonstration of everything I wrote. You read a structured, well-reasoned post and your only response was "looks like AI": no counterargument, no specific critique, just a pattern-matching reflex. That's not a rebuttal; that's the exact false-positive problem I described playing out in real time, in the same thread, aimed at the person who wrote the post warning you about it. The irony is genuinely impressive.

You didn't challenge a single point. You didn't identify anything factually wrong. You just had a vibe and ran with it. That's not moderation judgment; that's bias with a confidence problem.
 
For my own forum, every email address has to be legitimate, and the same goes for accounts. If they're posting AI content, they're not real people; they're just bots, and they're treated like scammers and banned.
It's real people copying and pasting from an AI query they made.
 
You did a good job mimicking AI, if only for the sheer verbosity!
 
Jokes aside @qubn, I'll answer your points, though in practice they are not as valid as one would think.
You can never fully determine whether a post was written by AI, and that's unlikely to change. The most you can do is look for statistical patterns across a user's post history. Do they consistently use em dashes? Are their apostrophes always typographically "smart"? ChatGPT, for instance, outputs curly apostrophes by default, which is a subtle but sometimes detectable tell.
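The typographic tells mentioned above can be checked mechanically. A minimal sketch, assuming you already have a user's post history as a list of strings; the tell list here is illustrative, not exhaustive:

```python
# Hypothetical sketch: scan a user's post history for typographic tells
# that chat models tend to emit by default (em dashes, "smart" curly
# quotes). These are weak signals on their own; treat them as inputs
# to a broader review, never as a verdict.

AI_TELLS = {
    "em_dash": "\u2014",           # em dash
    "curly_apostrophe": "\u2019",  # right single quote ("smart" apostrophe)
    "curly_quote": "\u201c",       # left double quote
}

def tell_rates(posts):
    """Return, per tell, the fraction of posts containing that character."""
    if not posts:
        return {name: 0.0 for name in AI_TELLS}
    return {
        name: sum(ch in p for p in posts) / len(posts)
        for name, ch in AI_TELLS.items()
    }
```

A consistently high rate across many posts is more meaningful than any single hit, since plenty of editors and phone keyboards also insert smart quotes.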
Long, long ago, I developed an anti-spam plugin for vBulletin. Spammers and spambot software publishers constantly built workarounds, so I created v3, an anti-spam cloud service. They could not reverse engineer the service, and it was glorious.

Russian spammers were emailing expletives to the support address, and a Chinese one later scooped up the old domain to park it on a porn/gambling page. That is to say, I am well-acquainted with the mechanics of post spam and heuristics related to UGC.
Formatting heuristics suffer the same problem. Bullet points, numbered lists, hedged language, and balanced paragraph lengths are common AI tells, but they're also just... good writing habits. You can't penalize someone for being organized.

But even that only gets you so far. No detection method can be both precise and reliable at the same time; you'll always be trading off false positives against false negatives. Flag too aggressively and you'll wrongly accuse genuine human writers. Be too lenient and bots slip through. Some people naturally write in a structured, formal, almost clinical style, the kind that AI detectors associate with generated text. Others have been writing on the internet long enough to have absorbed those patterns organically.
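The tradeoff described above can be made concrete: combine several weak signals into one score, and the flagging threshold becomes the knob that trades false positives against false negatives. The signal names and weights below are made up purely for illustration:

```python
# Illustrative sketch of the precision/recall tradeoff: a weighted sum
# of boolean formatting signals, with a threshold that decides when to
# flag. Nothing here is a real detector; the weights are assumptions.

SIGNAL_WEIGHTS = {
    "bullet_lists": 0.2,
    "hedged_language": 0.2,
    "uniform_paragraphs": 0.3,
    "smart_typography": 0.3,
}

def ai_score(signals):
    """Weighted sum of boolean signals, in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[k] for k, v in signals.items() if v)

def flag(signals, threshold=0.7):
    # Lower threshold -> more false positives (organized humans flagged);
    # higher threshold -> more false negatives (AI slips through).
    return ai_score(signals) >= threshold
```

Whatever threshold you pick, someone loses: a tidy human writer trips every formatting signal, while a lightly edited AI draft can dodge them all.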
The right heuristics in the appropriate setting, alongside other tools, performed excellently at finding both true positives and true negatives. A direct example: even at a high WPM, there is a limit to how fast you can type.

If you spit out five paragraphs of text in 15 seconds, that is a high-signal, low-noise heuristic. A genuine writer also does not make an essay out of every post. It is relatively easy to detect AI slop compared to good old spam.
Also, I wanted to point out that equating AI-generated with low quality is a flawed premise. Humans write poor posts all the time, and someone who uses AI as a drafting tool but actually reviews and edits the output can produce something genuinely valuable. What people really object to isn't the quality, it's the lack of authentic engagement. Those are two different things, and conflating them leads to exactly the kind of false assumptions that get real humans flagged as bots.
Poor posts written by humans are not typically this long, and someone who reviews and edits the output will not spam essays at short intervals. This big problem exists mostly in theory if the solution is engineered properly.
 
This is exactly what I was getting at in my earlier post. It's getting harder by the day to be taken seriously unless you post nonsensical illiterate garbage.
 
Longwinded posts ✅

That right there gives you away as an AI bot.

A normal person would at least include an example of their own experiences to make it their own.

Lecturing others ✅

This happens all the time on Threads.

A normal person is not asking to be lectured just because they called you out as AI.
Your posts smell like something an AI bot would post up.

Now answer me this: are you an AI bot?
 
Spam detection and AI-assisted posting are two completely different problems, and experience in one doesn't automatically transfer to the other. Spammers don't care about blending in; they want volume. Someone using AI to help draft a post while genuinely participating in a conversation is a completely different case. The heuristics that work against bots fall apart the moment the person behind the keyboard is real, because they're not trying to evade detection; they're just writing.

The timing heuristic catches the obvious cases, sure. But anyone actually using AI as a drafting tool writes elsewhere, reviews it, edits it, then pastes it in. The timing tells you nothing about that. And that's exactly where false positives happen: a real person who took their time and posted something coherent gets flagged because it looks "too good."

Your entire argument assumes the problem is lazy AI spammers, which is the easy case. The hard case, and the one that actually causes damage, is real people getting wrongly accused. That's not theoretical. That's zappaDPJ in post #14, accused of being a bot at 66 years old. Your engineered solution didn't help him. (https://xenforo.com/community/threa...ho-use-ai-in-their-posts.236433/#post-1776626)

And on the "short intervals" point, someone drafting carefully with AI isn't spamming essays back to back. That pattern describes a bot, not someone who actually read the thread, formed an opinion, and responded to it.

Now answer me this: are you an AI bot?
No. And I already answered this more thoroughly than it deserved.
 
Similarly, my Instagram account has 140K followers. I shoot drone videos, most of which take a lot of time and planning, and now when I post them my comments are littered with "That's fake" or "AI" without a second thought. It's almost to the point that it's not worth it anymore; these companies refuse to flag actual AI unless the poster opts to tick the "contains AI" button, which most never do.

The lines have been blurred, and if you make quality work it will be questioned as AI. It feels like actual content creators are cooked.
 