> I posted mine earlier on the thread https://xenforo.com/community/threa...-impact-on-forums.227661/page-20#post-1733922

Thank you! I had seen that before, just forgot!
> Don’t allow children on your forum. Simple as that.

Not really possible without age verification? Which I'm certainly not going to pay for.
> So, my next dumb question is: Is that all I need?

Not a dumb question. I'm hoping so but I don't know for sure.
> My draft risk assessment (comments appreciated). CSEA = Child Sexual Exploitation and Abuse.
>
> **User Generated Content**
> - Relevant illegal content: Hate speech, harassment, CSEA, terrorism, etc.
> - Risk level: Negligible
> - Evidence and reasoning: Users can post content, but the community is small and moderation is carried out regularly. Evidence: low volume of user reports, active (DBS-checked) moderator presence, clear community guidelines. There have been no incidents in 17 years. Users engaging in harmful behaviour would be immediately banned and any identified illegal behaviour reported to law enforcement agencies.
> - Mitigation measures: N/A
>
> **Anonymity**
> - Relevant illegal content: Harassment, trolling, illegal content sharing
> - Risk level: Negligible
> - Evidence and reasoning: Users cannot post anonymously.
> - Mitigation measures: N/A
>
> **User Connections**
> - Relevant illegal content: Grooming, harassment, coercive behaviour
> - Risk level: Low
> - Evidence and reasoning: Users can connect, but the community is small and connections may be limited. Evidence: low number of user-to-user connections. Private messages are not available until users have posted publicly and are known to have a legitimate interest in the forum topic as a professional, educator or hobbyist. Nor are private messages available to children; with or without effective age verification, this would include any potential groomer posing as a child. A very obvious, simple-to-use and effective private message report system is enabled and monitored regularly.
> - Mitigation measures: Monitor user interactions: implement non-intrusive systems to detect and flag suspicious patterns of user interaction (e.g. excessive private messaging between adults and minors) without infringing on privacy. Implement blocking features: allow users to block other users who engage in harmful behaviour. Educate users: provide information and resources on online safety and how to identify and report grooming or coercive behaviour.
>
> **Lack of Age Verification**
> - Relevant illegal content: CSEA, exposure to harmful content
> - Risk level: Medium
> - Evidence and reasoning: Any content that is inappropriate for children is removed via regular monitoring or reports. Any users who post such content are subject to disciplinary action and, depending on the severity, would be banned; any content deemed illegal would be immediately reported to law enforcement agencies.
> - Mitigation measures: Consider age verification measures: explore options for age verification (e.g. self-declaration, third-party verification services) while balancing privacy and accessibility concerns.

One very minor possible inconsistency:
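The "flag excessive private messaging" mitigation could be sketched roughly as below. This is a hypothetical, standalone illustration (not part of XenForo or any add-on); the 24-hour window and the threshold of 50 messages are arbitrary assumptions you would tune to your forum's normal traffic.

```python
# Toy sliding-window counter: flag a sender-recipient pair whose private
# message volume exceeds a threshold within a time window. Illustrative
# only - thresholds and window are assumptions, not recommendations.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 24 * 3600   # look-back window (assumed: one day)
THRESHOLD = 50               # assumed cutoff; tune to the forum's baseline

pm_log = defaultdict(deque)  # (sender, recipient) -> recent timestamps

def record_pm(sender, recipient, now=None):
    """Record one PM; return True if the pair should be surfaced to moderators."""
    now = time.time() if now is None else now
    q = pm_log[(sender, recipient)]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) > THRESHOLD
```

Because only counts per pair are kept, moderators see a volume signal without reading message content, which matches the "non-intrusive" intent of the mitigation.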
> Not a dumb question. I'm hoping so but I don't know for sure.

I'd second this.
> I certainly can't because one of my forums was being used by a paedophile to groom young people via personal messaging. …

Are you able to say that AI would have caught that? Especially if they are clever and know how to make the grooming appear innocent?
> You can use https://xenforo.com/community/resources/conversation-monitor.6715/
> If a target keyword gets a hit, it goes into a moderation queue for approval or deletion. It's rather useful. It does require an invite from Xon for another add-on, but that isn't an issue. I would recommend people use this.

That looks good, but how would we find keywords that groomers would use?
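The keyword-to-moderation-queue flow described above could look something like the sketch below. This is a hypothetical standalone example, not the Conversation Monitor add-on's actual implementation, and the watchlist entries are placeholders (a real list would come from something like the AFP glossary mentioned later in the thread).

```python
# Minimal sketch: hold a private message for moderation if it matches a
# watchlist term. WATCHLIST entries here are invented placeholders.
import re

WATCHLIST = {"our secret", "meet up alone", "don't tell your parents"}

moderation_queue = []  # messages held for staff approval or deletion

def screen_message(text, watchlist=WATCHLIST):
    """Return matched terms; a non-empty result means 'hold for moderation'."""
    normalised = re.sub(r"\s+", " ", text.lower())
    return sorted(term for term in watchlist if term in normalised)

def handle_private_message(sender, text):
    """Return True if delivered normally, False if held in the queue."""
    hits = screen_message(text)
    if hits:
        moderation_queue.append({"sender": sender, "text": text, "hits": hits})
        return False
    return True
```

Substring matching like this is deliberately crude; real tooling would also handle misspellings, leetspeak and emoji, which is exactly why a curated list such as the AFP glossary matters.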
That looks good but how would we find keywords that groomers would use?
https://www.vic.gov.au/working-with-children-check is what you need to look at.That'll be the hard part. Anything related to age, location and specifics would be easy.
AFP releases glossary of terms used by some sex predators to groom children | Australian Federal Police
The AFP is today releasing a glossary of acronyms and emojis that can be used by child predators who engage in sexualised communication online and via text message. The glossary has been developed from information contained in investigations and reports undertaken by the AFP-led Australian… (www.afp.gov.au)
Some more here.
> Are you able to say that AI would have caught that? Especially if they are clever and know how to make the grooming appear innocent?

I can say with some certainty that current AI tools would have been able to identify the problem before it was reported. That said, I would also add that it's somewhat ironic that AI has given these people powerful tools to aid them, e.g. text-to-chat, which enables an adult to more easily pose as a child engaging in sexually explicit chat.
> One very minor possible inconsistency: You said private messages are not available to children, but in the mitigation measure column you mention "excessive private messaging between adults and minors" as a flag for suspicious behaviour.

Good point, I'll just need to remove "between adults and minors", so it would just be excessive private messaging. But a sad thing would be if a genuine romance had been struck up between adults; I wouldn't want to be snooping on that. I suppose one could argue that they should have got a room somewhere else.
https://www.vic.gov.au/working-with-children-check is what you need to look at.
It's a little card that tells people you aren't a kiddy fiddler and that you're working with kids.
All adults have them.
I have one because I umpire local footy, which involves kids under the age of 18...
> Can XenForo (or a plugin) be used to encrypt messaging?

You can certainly do browser (client-side) encryption if you so desired. You can even have client-side certificates. Ages back they were more heavily used (memories...), but I think the overhead of managing them and the general muddle has seen them rather fall by the wayside...
> It's a little card that tells people you aren't a kiddy fiddler and that you're working with kids.

I fear the trouble is that the word "convicted" really belongs before "kiddy" in that sentence. Otherwise, at least theoretically, the problem does become easier, although it'd just land you back into the entire verification cycle - you'd have to find companies that were set up to check the credentials of your users. That has a cost. I suspect the only affordable solution there is simply an age check, which I fear many users will find intrusive enough. I can't see people wanting to submit to (and pay for) a full DBS check (UK equivalent of your card) just to chat about XenForo, for instance. You'd lose fewer users just saying "sorry, no PMs". People would then end up chatting over other systems. XF already ships with fields for various chat names, doesn't it? - ICQ, Skype, AIM and all those other popular services - so users will find a way!
> In short, can any forum owners here say categorically that private messaging is not being used for purposes to circumvent the law? I certainly can't, because one of my forums was being used by a paedophile to groom young people via personal messaging. I only found out because one of his victims informed me, and the fallout from that was legal, protracted and extremely unpleasant for everyone involved.

Yikes, not a nice situation to deal with.
> So what are our options? Option one would be to disable private messaging. Option two, make it clear that private messages are staff monitored and therefore not private. A third option would be to use AI, which would go a long way to satisfying the Act as it currently stands.

We already have some vague clause that says private messages are not private - basically saying that staff don't have casual access (I forget the phrasing we use) to messages, but they can be looked at. So not to assume they are 100% private.
For the curious, there is a whitepaper from WhatsApp somewhere around that outlines how they do their group chat encryption - which of course is the harder part: 1:1 with public/private keys is easy enough, but 1:many would require engaging the brain to develop a solution.
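The 1:many idea boils down to: encrypt the message body once with a fresh group key, then wrap that key separately for each member, which is the only per-member work. The sketch below illustrates that fan-out structure only. It is a toy, not real cryptography: the hash-based keystream and XOR "wrapping" stand in for what a real system (e.g. WhatsApp's sender keys) does with X25519 and AES, and nothing here is authenticated.

```python
# Toy illustration of "sender key" group-message fan-out. NOT secure -
# the keystream/XOR steps are placeholders for real primitives.
import hashlib
import secrets

def keystream(key, length):
    # Derive a deterministic keystream by hashing key || counter blocks.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def send_to_group(plaintext, member_secrets):
    group_key = secrets.token_bytes(32)      # fresh key for this message
    ciphertext = xor(plaintext, group_key)   # encrypt the body ONCE
    # Per-member work is only wrapping the 32-byte key, not the whole body.
    wrapped = {name: xor(group_key, s) for name, s in member_secrets.items()}
    return ciphertext, wrapped

def receive(name, my_secret, ciphertext, wrapped):
    group_key = xor(wrapped[name], my_secret)  # unwrap with my shared secret
    return xor(ciphertext, group_key)
```

The payoff is that a long message to N members costs one body encryption plus N small key wraps, instead of N full encryptions - which is why group chat needed its own design.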
> Has there been a statement about encrypted end-to-end chat and the OSA? It's certainly interesting and I would guess there is some stuff in there about it.

Not sure encrypting client-side technically absolves you of your OSA responsibilities; however, I suspect a lot of money and lawyers absolve Meta and co of theirs.
> not so great morally though

Provided you don’t try to pretend otherwise, why would it not be great morally?
> 1) Assuming it's from a verified account, would I be correct in saying this is sufficient age verification?

Given that all we can do is give our best advice based on somewhat crap info from Ofcom, I will weigh in by saying that if what you need now is a risk assessment, then I'd say that's a good bit of assumed mitigation. (Note the vagueness.) What you could do is ask PayPal, who are sometimes surprisingly helpful.
> 2) Also, some members of the forum have been on since 2007; again, I assume I am OK to say these original members are over 18 years old?

That sounds reasonable, unless their account was hijacked by a paedophile. Again, on your risk assessment it will look like negligible risk of them being a child.