How do you handle members who use AI in their posts?

This looks like something an AI bot would post up.
Congratulations, you just became a live demonstration of everything I wrote. You read a structured, well-reasoned post and your only response was "looks like AI", no counterargument, no specific critique, just a pattern-matching reflex. That's not a rebuttal, that's the exact false positive problem I described playing out in real time, in the same thread, aimed at the person who wrote the post warning you about it. The irony is genuinely impressive.

You didn't challenge a single point. You didn't identify anything factually wrong. You just had a vibe and ran with it. That's not moderation judgment, that's bias with a confidence problem.
 
For my own forum, everyone's email address has to be legit, and the same goes for the accounts. If they're posting AI, they're not real people, they're just bots, and they're treated like scammers and banned.
It's real people copying and pasting from an AI query they made.
 
Congratulations, you just became a live demonstration of everything I wrote. …
You did a good job mimicking AI, if only for the sheer verbosity!
 
Jokes aside @qubn, I'll answer your points, though in practice, they are not as valid as one would think.
You can never fully determine whether a post was written by AI, and that's unlikely to change. The most you can do is look for statistical patterns across a user's post history. Do they consistently use em dashes? Are their apostrophes always typographically "smart"? ChatGPT, for instance, outputs curly apostrophes by default, which is a subtle but sometimes detectable tell.
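For illustration, here is a minimal sketch of that kind of statistical check over a post history. The tells, rates, and function names are my own assumptions, not a production detector:

```python
import re

# Illustrative tells only; a real system would combine many more signals.
AI_TELLS = {
    "em_dash": re.compile("\u2014"),           # the em dash character
    "curly_apostrophe": re.compile("\u2019"),  # ChatGPT-style smart apostrophe
}

def tell_rates(posts):
    """Fraction of a user's posts containing each tell."""
    if not posts:
        return {}
    return {name: sum(1 for p in posts if rx.search(p)) / len(posts)
            for name, rx in AI_TELLS.items()}

history = [
    "I think it\u2019s fine \u2014 mostly.",
    "yeah thats what i said",
    "A subtle \u2014 but detectable \u2014 pattern, isn\u2019t it?",
]
print(tell_rates(history))  # both tells appear in 2 of 3 posts here
```

A consistent rate near 1.0 across a long history is more telling than any single post.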
Long, long ago, I developed an anti-spam plugin for vBulletin. Spammers and spambot software publishers constantly built workarounds, so I created v3, an anti-spam cloud service. They could not reverse engineer the service, and it was glorious.

Russian spammers were emailing expletives to the support address, and a Chinese one later scooped up the old domain to park it on a porn/gambling page. That is to say, I am well-acquainted with the mechanics of post spam and heuristics related to UGC.
Formatting heuristics suffer the same problem. Bullet points, numbered lists, hedged language, and balanced paragraph lengths are common AI tells, but they're also just... good writing habits. You can't penalize someone for being organized.

But even that only gets you so far. No detection method can be both precise and reliable at the same time; you'll always be trading off false positives against false negatives. Flag too aggressively and you'll wrongly accuse genuine human writers. Be too lenient and bots slip through. Some people naturally write in a structured, formal, almost clinical style, the kind that AI detectors associate with generated text. Others have been writing on the internet long enough to have absorbed those patterns organically.
The right heuristics in the appropriate setting, alongside other tools, performed excellently at finding both true positives and true negatives. A direct example: even with a high WPM, there is a limit to how fast you can type.

If you spit out five paragraphs of text in 15 seconds, that is a high-signal, low-noise heuristic. A genuine writer also does not make an essay out of every post. It is relatively easy to detect AI slop compared to good old spam.
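A rough sketch of that typing-speed check; the 15 chars/sec ceiling is an assumed figure (roughly 180 WPM at five characters per word), not a calibrated value:

```python
# Assumed human ceiling: ~180 WPM at 5 chars/word ≈ 15 characters per second.
MAX_CHARS_PER_SECOND = 15.0

def implausibly_fast(text, seconds_since_page_load):
    """True if the post is longer than a human could type in the time given."""
    return len(text) / max(seconds_since_page_load, 0.001) > MAX_CHARS_PER_SECOND

essay = "x" * 2000          # roughly five paragraphs of text
print(implausibly_fast(essay, 15))      # True: ~133 chars/sec is pasting, not typing
print(implausibly_fast("thanks!", 15))  # False: short replies pass
```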
Also, I wanted to point out that equating AI-generated with low quality is a flawed premise. Humans write poor posts all the time, and someone who uses AI as a drafting tool but actually reviews and edits the output can produce something genuinely valuable. What people really object to isn't the quality, it's the lack of authentic engagement. Those are two different things, and conflating them leads to exactly the kind of false assumptions that get real humans flagged as bots.
Poor posts written by humans are not typically so long, and someone who reviews and edits output will not spam post essays with short intervals. This is a big problem that exists mostly in theory if the solution is engineered properly.
 
Congratulations, you just became a live demonstration of everything I wrote. …
This is exactly what I was getting at in my earlier post. It's getting harder by the day to be taken seriously unless you post nonsensical illiterate garbage.
 
Congratulations, you just became a live demonstration of everything I wrote. …
Longwinded posts ✅

That right there gives you away as an AI bot.

A normal person would at least put in an example of their own experiences to make it their own.

Lecturing others ✅

This happens all the time on Threads.

A normal person is not asking to be lectured all because they call you out as AI.
Your posts smell like something an AI bot would post up

Now answer me this: are you an AI bot?
 
Jokes aside @qubn, I'll answer your points, though in practice, they are not as valid as one would think. …
Spam detection and AI-assisted posting are two completely different problems, and experience in one doesn't automatically transfer to the other. Fighting spambots and judging whether a real person used AI assistance are fundamentally different: spammers don't care about blending in; they want volume. Someone using AI to help draft a post while genuinely participating in a conversation is a completely different case. The heuristics that work against bots fall apart the moment the person behind the keyboard is real, because they're not trying to evade detection; they're just writing.

The timing heuristic catches the obvious cases, sure. But anyone actually using AI as a drafting tool writes it elsewhere, reviews it, edits it, then pastes it in. The timing tells you nothing about that. And that's exactly where false positives happen: a real person who took their time and posted something coherent gets flagged because it looks "too good."

Your entire argument assumes the problem is lazy AI spammers, which is the easy case. The hard case, and the one that actually causes damage, is real people getting wrongly accused. That's not theoretical. That's zappaDPJ in post #14, accused of being a bot at 66 years old. Your engineered solution didn't help him. (https://xenforo.com/community/threa...ho-use-ai-in-their-posts.236433/#post-1776626)

And on the "short intervals" point, someone drafting carefully with AI isn't spamming essays back to back. That pattern describes a bot, not someone who actually read the thread, formed an opinion, and responded to it.

Longwinded posts ✅ … Now answer me this: are you an AI bot?
No. And I already answered this more thoroughly than it deserved.
 
Congratulations, you just became a live demonstration of everything I wrote. …
Similarly, my Instagram account has 140K followers. I shoot drone videos, most of which take a lot of time and planning, and now when I post them my comments are littered with "That's fake" or "AI" without a second thought. It's almost to the point that it's not worth it anymore; these companies refuse to flag actual AI unless the poster opts to tick the "contains AI" button, which most never do.

The lines have been blurred, and if you make quality work, it will be questioned as AI. It feels like actual content creators are cooked.
 
We've put rules in the forums I administer: strictly, no AI content whatsoever. If you want to discuss AI (such as its many detriments to society, its misinformation, etc.), do it in the off-topic area, out of view of search engine spiders. We had too many members pasting in replies from LLM agents, and have even had fake AI "performances" used to start threads.

By allowing members to post AI slop, we let search engines index the misinformation it spews out as answers, which is not something we should be allowing. All we are doing is perpetuating that misinformation by providing it as a source for future LLM training, which in turn makes the misinformation even more "true" in the algorithms LLMs operate on.

Nobody can convince me AI does a single helpful thing. I have written too much about how AI and LLMs are already ruining many aspects of our society. I'm done beating that dead horse.

So TL;DR. No AI content on any of our sites. Ever. Do it and you'll be suspended.
 
qubn said:
Spam detection and AI-assisted posting are two completely different problems, and experience in one doesn't automatically transfer to the other. Fighting spambots and judging whether a real person used AI assistance are fundamentally different, spammers don't care about blending in, they want volume. Someone using AI to help draft a post while genuinely participating in a conversation is a completely different case.
Spam and AI-assisted posting are not mutually exclusive. The latter can produce confident-sounding, linguistically coherent posts that mean nothing in reality. The quoted post is a prime example.
qubn said:
The heuristics that work against bots fall apart the moment the person behind the keyboard is real, because they're not trying to evade detection, they're just writing.

The timing heuristic catches the obvious cases, sure. But anyone actually using AI as a drafting tool writes it elsewhere, reviews it, edits it, then pastes it in. The timing tells you nothing about that. And that's exactly where false positives happen, a real person who took their time and posted something coherent gets flagged because it looks "too good."
To reply to a post, you have to 1. open the thread, 2. read the post, and then 3. write the reply, regardless of how it is composed. That sequence does not occur instantaneously, and for most cases, the tab remains open until you reply.

That page load can be cryptographically signed by the server to capture when it occurred. The submit button was then locked for 15 seconds, or whatever time the administrator configured.

I provided one simple timing heuristic that was used in production with real people. Most never noticed it, since the countdown elapsed before they clicked 'Post reply'; it did not "fall apart" there and would not here.
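For the curious, a server-signed page-load timestamp like the one described can be sketched with a standard HMAC. The secret, token format, and 15-second lock below are illustrative values, not the actual production implementation:

```python
import hmac
import hashlib
import time

# Illustrative values; a real deployment would rotate the secret.
SECRET = b"server-side-secret"
MIN_SECONDS = 15

def issue_token(now=None):
    """Embed the page-load time in a token the client cannot forge."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"

def valid_submission(token, now=None):
    """Reject spoofed timestamps and submissions faster than the lock."""
    ts, sig = token.split(".")
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # client tampered with the timestamp
    elapsed = (now if now is not None else time.time()) - int(ts)
    return elapsed >= MIN_SECONDS

token = issue_token(now=1000)
print(valid_submission(token, now=1005))  # False: only 5 seconds elapsed
print(valid_submission(token, now=1020))  # True: 20 seconds elapsed
```

Because the timestamp is signed server-side, a client cannot backdate the page load to fake a plausible composition time.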
qubn said:
Your entire argument assumes the problem is lazy AI spammers, which is the easy case. The hard case, and the one that actually causes damage, is real people getting wrongly accused. That's not theoretical. That's zappaDPJ in post #14, accused of being a bot at 66 years old. Your engineered solution didn't help him.
Your argument is that heuristics categorically don't work for AI posts (despite me explicitly stating that heuristics are used alongside other tools) because, "see, humans accuse other humans of being AI!", which makes no sense.
 
vbresults said:
Spam and AI-assisted posting are not mutually exclusive. The latter can produce confident-sounding, linguistically-coherent posts that mean nothing in reality. The quoted post is a prime example.
And calling my post "confident-sounding content that means nothing in reality" while spending two replies never pointing to a single factual error? That's not a rebuttal; that's just an opinion with no substance behind it. Which is exactly what I called out in Suzanne's case. Different person, same pattern.

vbresults said:
To reply to a post, you have to 1. open the thread, 2. read the post, and then 3. write the reply, regardless of how it is composed. That sequence does not occur instantaneously, and for most cases, the tab remains open until you reply.

That page load can be cryptographically signed by the server to capture when it occurred. The submit button timer was locked for 15 seconds or whatever time was configured by the administrator.

I provided one simple timing heuristic that was used in production with real people. It was not noticed by most as the countdown elapsed before they clicked 'Post reply', did not "fall apart" there and would not here.
You've now described a specific server-signed page load with a submit lock, which is a much narrower tool than what you originally implied. It still doesn't catch someone who opens the thread, reads it, drafts a response elsewhere over several minutes, then pastes it after the timer clears. That's not a bot, that's a human who uses AI as a drafting tool, exactly the case I described.

vbresults said:
Your argument is that heuristics categorically don't work for AI posts (despite me explicitly stating that heuristics are used alongside other tools) because, "see, humans accuse other humans of being AI!", which makes no sense.
I never argued heuristics categorically don't work. I argued they can't reliably distinguish a careful human from someone using AI assistance; those are different claims. On the "not mutually exclusive" point, sure, but that's not what I was talking about. The question is whether heuristics built for spam volume attacks can identify a real person using AI as a drafting tool. That's a completely different problem.
 
qubn said:
You've now described a specific server-signed page load with a submit lock, which is a much narrower tool than what you originally implied.
You cannot implement a timer without locking the submit button; otherwise a short reply could trigger a false positive. And without signing it, the start time can be spoofed by the client. This is the minimum for a proper implementation, not a "narrow tool".
qubn said:
It still doesn't catch someone who opens the thread, reads it, drafts a response elsewhere over several minutes, then pastes it after the timer clears. That's not a bot, that's a human who uses AI as a drafting tool, exactly the case I described.
It catches humans who prompt, copy, and paste, as well as many bots. The timer signature could also be short-lived: reset frequently while the tab is open, stop when you tab out, and reset when you tab back in.

Another heuristic could tie the timer length and signature to content length. Most people compose long posts in the editor (zero impact to them), and those who do not just have to wait a few more seconds to submit.
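A sketch of that length-scaled timer; the base lock and assumed typing speed are illustrative numbers, not tuned values:

```python
BASE_SECONDS = 15           # floor for any post, as with the original timer
ASSUMED_CHARS_PER_SEC = 8   # generous sustained human typing speed

def min_compose_seconds(text):
    """Minimum time the submit button stays locked for a post of this length."""
    return max(BASE_SECONDS, len(text) / ASSUMED_CHARS_PER_SEC)

print(min_compose_seconds("short reply"))  # 15: the floor applies
print(min_compose_seconds("x" * 2400))     # 300.0: an essay needs plausible time
```

Short replies never notice the lock; only pasted essays have to wait.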
qubn said:
I never argued heuristics categorically don't work. I argued they can't reliably distinguish a careful human from someone using AI assistance, those are different claims.
Ok. I am saying that heuristics do not have to make that distinction if implemented properly.
qubn said:
The question is whether heuristics built for spam volume attacks can identify a real person using AI as a drafting tool.
Most anti-spam heuristics I built were not for countering volume attacks.
 
Haven't come across members using AI in posts, but I have in images. I haven't gone as far as saying "no AI" in the forum rules, but I made it clear any photos created with AI need to be in their own section and marked as AI-created, so members can do their creative stuff without confusing others into thinking it might be a real pet. For the monthly photo competition, though, it is strictly no AI, Photoshopping, etc.

The irony is, occasionally a member will say "Google AI gives this info", and the info is a quote from the forum lol! Which I point out!
 