How do you handle members who use AI in their posts?

island

Seeing an increase in the number of posts that have AI parts in them. Still not huge, but I suspect it will become an increasing thing to think about.

Should members who use AI to compose their post be required to put the AI generated parts in some kind of AI-quote tag?

In the longer term, as people start using AI more and more to generate all their writing, how will this affect forums and the human connection?
 
For my own forum, everyone's email addresses have to be legit and the same with the accounts. If they're posting AI, they're not real people; they're just bots, and they're treated like scammers and banned.
 
Clear policies are the first step - let your members know what is expected and what is not permitted.

While I do have a policy forbidding any AI generated content, sometimes it is useful to have an AI summary of something - provided that it is clearly documented as being AI generated. I was actually thinking today that some type of "AI-quote" tag might be useful for this situation.
 
It's already a miracle that users post in forums. I'd say don't overwhelm them with unnecessary rules...
In the end, it's still a discussion or a post, and it's definitely more accurate since it's written by artificial intelligence based on what the user wanted to say...
 
I was actually thinking today that some type of "AI-quote" tag might be useful for this situation.
Although it’s easy enough with custom BB code, I think this would make a good suggestion.

and it's definitely more accurate since it's written by artificial intelligence based on what the user wanted to say...
Unless it isn’t. Are you saying AI never hallucinates or regurgitates false information?
 
Only one community I have will likely have an issue with people posting blatant AI slop, and only one person has done it so far, and they basically got mass ignored because of it.

On other forums, I am just ignoring people who just blatantly post using AI, because they almost never provide any value to discussions in the first place. Until XF has guidelines for AI usage on add-ons, I am also ignoring most devs I think are just vibe coding add-ons as well.
 
I don't mind AI-assisted posts too much, as long as the post is of high quality. But I don't like the blatant use of AI where the user obviously didn't review/edit what they posted before they posted it. Like @Forsaken said, the key word is blatant. If you're thinking "this is AI talking" when reading instead of "this is the user responding" then that's a good sign that the post is "low effort" or "low value." A basic rule to use for this might be:
  • Be yourself:
  • We want to hear from you! That's why we're on this forum, not asking Gemini, etc.; therefore, make sure it's still you talking in your posts. Using some AI to help write is permitted, but blatantly low-effort robot posts may be removed by staff.
 
LLMs are an internet revolution; they've already created the best movies, books, and games. Shouldn't you be celebrating each new AI post as a gift from the singularity?
 
You can make mistakes that are certainly 99% less than the mistakes a user can make...
A person making a mistake can take accountability for it and hopefully learn from it; AI cannot take accountability (and the companies behind AI models avoid accountability like the plague) and does not learn from its mistakes. Comparing the two is nonsensical.
 
You can make mistakes that are certainly 99% less than the mistakes a user can make...
Less or fewer?

 
You can make mistakes that are certainly 99% less than the mistakes a user can make...
It would help to remember that LLMs are just statistics under the hood and are all set up not to disappoint the end user, i.e. if an LLM can't find a correct answer to your question, it will give you any answer rather than come back empty-handed, unless you specify that you are okay with that.
 
Seeing an increase in the number of posts that have AI parts in them.
Here's a thing. Recently I've found myself being accused of being an AI bot, which is not bad for a semi-literate 66-year-old who left school by mutual agreement at the age of 14.

So my question to you is how do you determine that posts have 'AI parts in them'?

Should members who use AI to compose their post be required to put the AI generated parts in some kind of AI-quote tag?
I highly doubt making it a requirement would be followed or is in any way enforceable, but having an AI quote button in the editor toolbar is definitely worth consideration.
 
We have one user who spews AI. You can tell because those are the parts of their posts that are actually readable. Their own stuff is kind of short, choppy, and sentence-free. We aren't fussing too much about it. Before AI, they would spew text from favourite Biblical commentary sites so spewing LLM generated text is just a different version of their usual MO. Other users have posted AI stuff when we were specifically talking about AI or have been good about flagging the source. So, again, no particular concerns. We are like a stereotypical small town nowadays: small and everybody knows everybody so I don't think we would see a problem unless we had an influx of new posters.

The site where I post my NSFW writing (yes, I write smut) is having major headaches of late, with people clogging up the story moderation queues with AI slop. There's a specific policy of not accepting AI-generated stories but that doesn't seem to stop people from trying. Not Xenforo, though.
 
Most AI slop posts I've seen have multiple paragraphs and the hallmark list breakdown format.

One solution might be to show a warning when a post is longer than X paragraphs or matches this pattern, saying that AI slop is forbidden and that, while the post will go through, it will be flagged for moderators to manually review.

Reddit has a system like this for spam prevention and moderation.
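As a rough illustration of what such a check could look like at submit time: flag posts that are long or heavy on the "breakdown" format (bullet lines plus bold mini-headers). This is a hypothetical sketch, not a feature of Reddit or XenForo, and the thresholds are arbitrary stand-ins for the "X paragraphs" above.

```python
import re

# Hypothetical heuristic: flag posts that are long, or dense with the
# "AI breakdown" format (bullet lists plus bold mini-headers), for review.
MAX_PARAGRAPHS = 6  # arbitrary stand-in for "X paragraphs"
BULLET_RE = re.compile(r"^\s*([-*\u2022]|\d+\.)\s", re.MULTILINE)
BOLD_HEADER_RE = re.compile(r"^\s*\*\*[^*]+\*\*:?\s*$", re.MULTILINE)

def should_flag_for_review(post: str) -> bool:
    """Return True if the post should be queued for moderator review."""
    paragraphs = [p for p in post.split("\n\n") if p.strip()]
    bullets = len(BULLET_RE.findall(post))
    headers = len(BOLD_HEADER_RE.findall(post))
    # Long posts, or posts with a dense list-breakdown structure, match.
    return len(paragraphs) > MAX_PARAGRAPHS or (bullets >= 5 and headers >= 2)
```

On a match, the forum would show the warning, let the post through, and add it to the moderation queue, so false positives cost a moderator a glance rather than a member a post.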
 
Showing a warning message based on a pattern match of the AI breakdown format, if it can be done, might be a good option, as it would encourage posters to at least closely read and consider the AI output they are using before posting, which ensures the quality is there.
 
You can never fully determine whether a post was written by AI, and that's unlikely to change. The most you can do is look for statistical patterns across a user's post history. Do they consistently use em dashes? Are their apostrophes always typographically "smart"? ChatGPT, for instance, outputs curly apostrophes by default, which is a subtle but sometimes detectable tell.

But even that only gets you so far. No detection method can be both precise and reliable at the same time, you'll always be trading off false positives against false negatives. Flag too aggressively and you'll wrongly accuse genuine human writers. Be too lenient and bots slip through. Some people naturally write in a structured, formal, almost clinical style, the kind that AI detectors associate with generated text. Others have been writing on the internet long enough to have absorbed those patterns organically.

Formatting heuristics suffer the same problem. Bullet points, numbered lists, hedged language, and balanced paragraph lengths are common AI tells, but they're also just... good writing habits. You can't penalize someone for being organized.

Also, I wanted to point out that equating AI-generated with low quality is a flawed premise. Humans write poor posts all the time, and someone who uses AI as a drafting tool but actually reviews and edits the output can produce something genuinely valuable. What people really object to isn't the quality, it's the lack of authentic engagement. Those are two different things, and conflating them leads to exactly the kind of false assumptions that get real humans flagged as bots.
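To make the precision/recall tradeoff above concrete, here is a hypothetical sketch of the statistical approach described: count typographic tells (em dashes, curly "smart" quotes) across a user's post history and compare the rate against a tunable threshold. The threshold directly trades false positives for false negatives; the numbers are illustrative, not calibrated.

```python
# Hypothetical sketch: score a user's post history for typographic tells
# sometimes associated with LLM output (em dashes, curly quotes/apostrophes).
TELLS = ("\u2014", "\u2019", "\u201c", "\u201d")  # em dash and smart quotes

def tell_rate(posts: list[str]) -> float:
    """Fraction of posts containing at least one typographic tell."""
    if not posts:
        return 0.0
    hits = sum(1 for p in posts if any(t in p for t in TELLS))
    return hits / len(posts)

def looks_suspicious(posts: list[str], threshold: float = 0.8) -> bool:
    # A high threshold accepts more false negatives in exchange for fewer
    # false positives: only near-uniformly "smart-punctuated" histories flag.
    return tell_rate(posts) >= threshold
```

Lowering the threshold catches more AI use but also flags humans whose word processors auto-insert curly quotes, which is exactly the tradeoff that makes any single tell unreliable on its own.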
 
You can never fully determine whether a post was written by AI, and that's unlikely to change. …
🧐
 
You can never fully determine whether a post was written by AI, and that's unlikely to change. …
This looks like something an AI bot would post up.
 