Spam filter needs to be reworked to behave differently for conversations vs public content - comments/threads.

MaximilianKohler


Summary of problem:

Someone on my forum is getting the error "Your content can not be submitted. This is likely because your content is spam-like or contains inappropriate elements. Please change your content or try again later. If you still have problems, please contact an administrator." when trying to send a message.

This is not just an Akismet problem. If the conversation message contains any of the spam phrases set in /admin.php?options/groups/spam/, that will trigger it too. In my opinion this is a major problem that should be fixed ASAP.

I'm using that spam phrases list as a "preapprove filter". If matching conversation messages simply don't go through at all, that makes it completely useless for that purpose.

It's really surprising that it currently functions like this. I'm very curious how other people are managing spam and filters, because to me this seems to make the spam filter and Akismet completely useless.

You're probably familiar with Reddit's AutoMod filtering options? They seem to be much better than XenForo's.

Possible solutions:

Conversations need to be treated differently from public comments.

This addon https://xenforo.com/community/resources/ozzmodz-spam-phrases-report.8968/ gave me the idea of reporting the content instead of disallowing it.
That may be ok (though not ideal) for conversations, but spam matches for comments/threads should still be filtered to the mod queue.

Current functionality: filters public content, denies private messages.

1. The most ideal solution would be to filter (instead of deny) private messages into the mod queue, just like with public content.
2. The next best option would be to report messages and filter public content.
3. The next best would be to add a checkbox to only apply the spam filter to public content. Then we could pair that with the addon above to report PMs.
3.5. OR, create a separate spam filter for conversations.

For now, the only "solution" I see is to filter all content to the mod queue for preapproval...
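
To make what I'm asking for concrete, here's a rough sketch (plain Python, not XenForo code; the phrase list and the "conversation action" option are things I made up for illustration) of the behavior difference between today and options 1-3 above:

```python
# Illustrative sketch only (plain Python, not XenForo code). It just models the
# behaviour difference described above; the phrases and the "conversation action"
# option are made up.

SPAM_PHRASES = ["cheap pills", "casino bonus"]   # example phrases from the spam phrases list
CONVERSATION_ACTION = "moderate"                 # hypothetical option: "deny" (today), "moderate" (1.) or "report" (2./3.)

def check_message(text: str, is_conversation: bool) -> str:
    """Return what happens to a submitted message: allow, moderate, report or deny."""
    matched = any(phrase in text.lower() for phrase in SPAM_PHRASES)
    if not matched:
        return "allow"
    if not is_conversation:
        return "moderate"            # public content: held for approval (current behaviour)
    return CONVERSATION_ACTION       # conversation message: today this is effectively "deny"

print(check_message("Get a casino bonus now!", is_conversation=False))  # moderate
print(check_message("Get a casino bonus now!", is_conversation=True))   # moderate instead of deny
print(check_message("Hi, thanks for the help!", is_conversation=True))  # allow
```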
 
It's really surprising that it currently functions like this.
I get your point, but most forums would set the spam phrases to only be active until x number of posts, by which time you should be confident the user is not a spammer and it no longer applies.

Or is this something else that Akismet is doing, not related to the defined spam phrases?
 
I checked the spam trigger log and it says "Spam phrase matched (keyword), Akismet matched". So it seems it matched both my keyword filter and some Akismet trigger. Either one, or both, is problematic. I've had to disable both completely.

most forums would set the spam phrases to only be active until x number of posts
Well, that answers my question about how people are using the current system, but I don't think that's a good way to do things. There are certain websites/phrases that I would want filtered (or at least reported automatically) indefinitely. And this person was a new user, their message was fine, and there was no way for me to allow it other than completely disabling the spam filter.
 
1. The most ideal solution would be to filter (instead of deny) private messages into the mod queue, just like with public content.
I'd be totally against this, as conversations (or, with XenForo 2.3+, Direct Messages) are not public content.
Nobody except the participants should have access to the content.

2. The next best option would be to report messages and filter public content.
Almost as bad as the first option :)

3. The next best would be to add a checkbox to only apply the spam filter to public content.
That would be acceptable, either as
  1. An option to disable all content spam checkers for conversations
  2. An option to disable certain content checkers
  3. Options for individual content checkers to apply different rules for conversations (rough sketch below)
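
Roughly what I mean by the third variant (plain Python pseudo-configuration; the option names are invented, nothing here is actual XenForo code):

```python
# Sketch of per-checker rules for conversations -- invented option names, not XenForo options.
from dataclasses import dataclass

@dataclass
class CheckerRule:
    enabled_for_public: bool = True
    enabled_for_conversations: bool = True
    conversation_action: str = "deny"   # what a match should do in a conversation

# Hypothetical per-checker configuration:
RULES = {
    "spam_phrases": CheckerRule(enabled_for_conversations=False),   # variant 1/2: skip this checker entirely
    "akismet":      CheckerRule(conversation_action="report"),      # variant 3: run it, but only report on a match
}

def checkers_to_run(is_conversation: bool):
    """Yield (checker name, action on match) for the given content type."""
    for name, rule in RULES.items():
        if is_conversation and not rule.enabled_for_conversations:
            continue
        if not is_conversation and not rule.enabled_for_public:
            continue
        yield name, (rule.conversation_action if is_conversation else "moderate")

print(list(checkers_to_run(is_conversation=True)))   # [('akismet', 'report')]
print(list(checkers_to_run(is_conversation=False)))  # [('spam_phrases', 'moderate'), ('akismet', 'moderate')]
```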
 
Nobody except the participants should have access to the content.
Almost as bad as the first option
I don't think this makes sense. Forum users have no expectation of privacy from the forum owners. People can already report PMs; this just automates it. Site administrators should have the ability to police content on their website.

It's completely expected that admins of large social media companies can read your PMs on their website and take action if you're doing something they don't like. Why would it be any different for forums? Forums aren't Signal. If you want secure messaging use Protonmail & Signal, not someone else's forum.

One thing you may want to do is allow moderators to police public content but not PMs. So only admins can see reported PM content.
 
People can already report PMs.
Users can (if they have permission to do so) also invite further users to join a conversation.
If a user reports a conversation message this is pretty much the same - it is their decision to extend access to the content, and only this user decision allows staff access to (previously) private content.

Site administrators should have the ability to police content on their website.
Public content - absolutely.
Private content - only if desired by participants or legally required.

It's completely expected that admins of large social media companies can read your PMs on their website and take action if you're doing something they don't like.
No.
 
Maybe my sites are too small, but I don't use the Akismet or spam words functions at all. I just use OzzModz Spaminator mods and I have no issues.
 
Maybe my sites are too small, but I don't use the Akismet or spam words functions at all. I just use OzzModz Spaminator mods and I have no issues.

Some administrators simply overdo it when it comes to spam filters.
We have exactly 3 words in the spam filter.

If administrators or moderators think they have to read PMs, something is wrong with their perception.

I don't know where you got this information. But that is definitely not the case.
If your users ever notice that you read their PMs, they will probably be disappointed.

I'm with @Kirby: PMs should and must remain private as far as possible. Only if you are invited as an administrator, or a PM is reported, should you get access.
The latter happens to us once or twice a year. But then it's exclusively about insults, hostility or something similar.
 
If administrators or moderators think they have to read PMs, something is wrong with their perception.
There are a huge variety of uses for forums. Admins have an interest in preventing spam, scammers, illegal content, racism, and more. All big social media companies would screen PMs for these things.
 
There are a huge variety of uses for forums.
Yes the same on our site.
Admins have an interest in preventing spam, scammers, illegal content, racism, and more.
Only if it is public.
Your members can report all this at any time. Why do you think you have to protect them? Aren't they adults?
Rather, I have the impression that you are putting everyone under general suspicion.
All big social media companies would screen PMs for these things.
Yes, in China for sure.
 
Only if it is public
Nonsense. People don't get to use my website for whatever purpose they want. There are a wide variety of activities that I do not want occurring on my site. I'm more likely to disable PMs completely than allow them unrestricted. That shouldn't be the choice we have (all or nothing).

Yes, in China for sure.
Seriously? You think you can do anything you want on Twitter, Instagram, Facebook, etc. via DMs? I know from experience that you cannot. It would be a hellscape if you could.
 
I'm more likely to disable PMs completely than allow them unrestricted.
That's fine, and I'd encourage you to do so, as this doesn't impact your users' privacy.

You think you can do anything you want on Twitter, Instagram, Facebook, etc. via DMs?
I don’t know what you can or cannot do via "direct messages" on big social media platforms.

What I do know is that I don’t actively use those due to privacy concerns.
 
Nonsense. People don't get to use my website for whatever purpose they want. There are a wide variety of activities that I do not want occurring on my site. I'm more likely to disable PMs completely than allow them unrestricted. That shouldn't be the choice we have (all or nothing).
How do you actually know what the users write there? Do you just think that or do you check it?
Or have there been one or two cases where someone reported a conversation and that's how you got the information?

Almost nobody wants hate speech or illegal things in their forum. I understand that very well. But what the users write in DMs is beyond my knowledge. As I understand it, we are not responsible for that, because we can't control content we don't have access to. If you betray the trust of your users, it can end badly.


Seriously? You think you can do anything you want on Twitter, Instagram, Facebook, etc. via DMs? I know from experience that you cannot. It would be a hellscape if you could.
I can only speak for Facebook. In DMs there I could talk about anything, no matter what it was about.
 
If you betray the trust of your users
I have never assumed that my PMs on forums were ever hidden from the forum owners. That doesn't seem like a logical assumption. They are simply private from public view. Similarly, I would never assume that of Twitter, or Reddit, or any other major site. Email seems far more private, yet I learned some years ago that even our emails get read by the host (Gmail, etc.).

Unless a service is explicitly advertised as end-to-end encrypted, like Signal and Protonmail, I wouldn't assume they were completely private.

Even without automatic reporting of words/phrases in PMs, my understanding is that Xenforo PMs are not private/secure from the forum owner; they're accessible via the database; it just requires more technical expertise to view them.
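
For example, something along these lines would read them directly (a sketch only; the table and column names are my understanding of the XenForo 2 schema, and the connection details are placeholders):

```python
# Sketch only: reading conversation messages straight from the database.
# Table/column names are my understanding of the XenForo 2 schema (xf_conversation_message);
# connection details are placeholders.

import pymysql  # pip install pymysql

conn = pymysql.connect(host="localhost", user="xf_user", password="change-me", database="xenforo")
with conn.cursor() as cur:
    cur.execute(
        "SELECT username, message_date, message "
        "FROM xf_conversation_message "
        "ORDER BY message_date DESC LIMIT 5"
    )
    for username, message_date, message in cur.fetchall():
        # message_date is a unix timestamp; message is the raw BB code text
        print(username, message_date, message[:80])
conn.close()
```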
 
Even without automatic reporting of words/phrases in PMs, my understanding is that Xenforo PMs are not private/secure from the forum owner; they're accessible via the database; it just requires more technical expertise to view them.
Of course not.
But you have to take action yourself. Without actively searching the database you can't read the DMs. And that's what would be reprehensible, if you did it.
It would only be fair to your users to communicate exactly that.
 
If a user reports a conversation message this is pretty much the same - it is their decision to extend access to the content, and only this user decision allows staff access to (previously) private content.
To play devil's advocate here: this breaches the privacy of the other participant(s) of the conversation. It's against their decision. If you feel that users should have privacy in conversations, then there should also not be reporting of conversations.

The thing is that this perception of privacy will be abused for serious unethical and illegal purposes, as well as by spammers, scammers, harassment, stalking and worse. In my experience (it also depends on the niche) taking your hands off conversations and allowing anything in there unless it's reported can have dire consequences. About once a year I get alerted to a case of very serious stalking of vulnerable members. I hate to go through conversations, but in such a case it's absolutely needed.
A keyword trigger is really useful for conversations if you use very specific phrases that come up in very serious situations. By serious I mean cases where you would not think twice about banning the offender and you may even want to alert the police.
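
Something like this is what I have in mind (a rough sketch, not an actual add-on; the phrase list is a placeholder): a tiny list of very specific phrases that never blocks the message but quietly raises an admin-only report.

```python
# Rough sketch, not an actual add-on: a tiny list of very specific phrases that
# never blocks a conversation message, but quietly raises an admin-only report.

SERIOUS_PHRASES = ["example serious phrase"]   # placeholder; in practice a handful of very specific terms

def maybe_flag(message_text: str):
    """Return an admin-only report if a serious phrase matches, otherwise None.
    The message itself is always delivered; nothing is blocked."""
    lowered = message_text.lower()
    for phrase in SERIOUS_PHRASES:
        if phrase in lowered:
            return {
                "visibility": "admins_only",    # moderators never see these reports
                "reason": f"matched serious phrase: {phrase}",
                "excerpt": message_text[:200],
            }
    return None
```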

In my view a community must also be a relatively safe space for vulnerable people, where they can trust YOU as the admin to take care that they will not be abused or taken advantage of by monsters. I also think of communities as the admin's house and responsibility (to a degree), and the admin has a duty of care to make sure that there are no serious unethical or illegal things going on within their property, including conversations.

A full hands-off approach will (at least in some niches or board sizes) invite serious unethical or highly illegal behavior, and by taking such a hands-off approach the admin is the cause of this and may even be liable for it.

I'm in no way saying that admins should start reading random conversations. When I read a flagged conversation I hate this part of the job. I find it highly demotivating and can fully understand the feeling expressed against breaching the privacy of members. I would like to have nothing to do with it. But unfortunately I don't think it's a realistic stance to take a hands-off approach and allow situations to start, fester and continue on your premises and on your watch.

I'd love to see better solutions for this problem. I have thought about feeding all conversations to an AI so that it can report them when things go really bad. But AI itself is a privacy issue.

Thoughts about this are more than welcome. I truly hate this part of forum administration.
 
If you feel that users should have privacy in conversations, then there should also not be reporting of conversations.
Well ... I tend to agree and disagree :)

If you go that far you would also have to remove the possibility to forward emails for example.
Or to print them, to copy & paste them, ...

A user that participates in a conversation is a legitimate recipient of the information posted in the conversation.

It would be technically impossible to prevent participants from sharing their knowledge of the content with others (via print, copy & paste, etc.)

So could that (or reporting a conversation message) violate privacy / personal rights of other participants?
For sure it could.

But as said before, you have to draw a line somewhere.
If a report is justified there would be a collision of two rights - the violation of privacy of the reported conversation message's poster vs. the reporter's rights that were violated by that message; normally the latter should outweigh the former.
If the report is not justified, it would be the reporter who violated the reportee's rights.

About once a year I get alerted to a case of very serious stalking of vulnerable members. I hate to go through conversations [....], but in such a case it's absolutely needed.
A keyword trigger is really useful for conversations if you use very specific phrases that come up in very serious situations.

[..]

A full hands-off approach will (at least in some niches or board sizes) invite serious unethical or highly illegal behavior, and by taking such a hands-off approach the admin is the cause of this and may even be liable for it.

[..]

When I read a flagged conversation I hate this part of the job.

[...]

Fully understand the feeling expressed against breaching the privacy of members. I would like to have nothing to do with it.

If you understand the concerns about privacy and hate what you are doing, why don't you just stop doing it and only do what is legally required?

This is what I would do:
I would not touch conversation messages that haven't been reported with a ten-foot pole.

Or I would turn off conversations if they "can't be used safely" without full 1984-style surveillance.

If you think that both options are not acceptable and a "full airbag protection" for your members is necessary, you could communicate very openly what you are doing, e.g. show an information block like
Conversations are not public, but they are not private either.
We preserve the right to automatically scan all content posted in conversations for potential violations of our terms of service or applicable laws.
We will perform a human review of any flagged content.
You can find more information in our privacy policy.
within conversations, before starting them, etc.

With this info in place I'd be fine with monitoring conversations.

Just my 0,02 €
 
I agree with @Alpha1. I have no interest in reading PMs. I have an interest in preventing bad actors from using my website for their purposes.

The standard terms already make it clear that it's not private:
All content you submit, upload, or otherwise make available to the Service ("Content") may be reviewed by staff members. All Content you submit or upload may be sent to third-party verification services (including, but not limited to, spam prevention services). Do not submit any Content that you consider to be private or confidential.

If you advertise your forum to be a completely private and secure messaging system, then that's different. But the vast majority of forums do not do that.
 
Just to bring this problem back up, did we get a solution for this conversation spam issue?
To in effect tone down the spam controls for conversations; whilst it is a rare issue, it can still cause the occasional problem.
 