UK Online Safety Regulations and their impact on forums

My draft risk assessment (comments appreciated). CSEA = Child Sexual Exploitation and Abuse.

Risk: User Generated Content
Relevant illegal content: Hate speech, harassment, CSEA, terrorism, etc.
Risk level: Negligible
Evidence and reasoning: Users can post content, but the community is small and moderation is carried out regularly. Evidence: low volume of user reports, active (DBS-checked) moderator presence, clear community guidelines. There have been no incidents in 17 years. Users engaging in harmful behaviour would be immediately banned and any identified illegal behaviour reported to law enforcement agencies.
Mitigation measures: N/A

Risk: Anonymity
Relevant illegal content: Harassment, trolling, illegal content sharing
Risk level: Negligible
Evidence and reasoning: Users cannot post anonymously.
Mitigation measures: N/A

Risk: User Connections
Relevant illegal content: Grooming, harassment, coercive behaviour
Risk level: Low
Evidence and reasoning: Users can connect, but the community is small and connections may be limited. Evidence: low number of user-to-user connections. Private messages are not available until users have posted publicly and are known to have a legitimate interest in the forum topic as a professional, educator or hobbyist. Nor are private messages available to children; with or without effective age verification, this would include any potential groomer posing as a child.
Mitigation measures: A prominent, simple-to-use private message reporting system is enabled and monitored regularly. Monitor user interactions: implement non-intrusive systems to detect and flag suspicious patterns of user interaction (e.g. excessive private messaging between adults and minors), without infringing on privacy; see the sketch after this table. Implement blocking features: allow users to block other users who engage in harmful behaviour. Educate users: provide information and resources on online safety and how to identify and report grooming or coercive behaviour.

Risk: Lack of Age Verification
Relevant illegal content: CSEA, exposure to harmful content
Risk level: Medium
Evidence and reasoning: Any content that is inappropriate for children is removed via regular monitoring or reports. Any users who post such content are subject to disciplinary action and, depending on the severity, would be banned; any content deemed to be illegal would be immediately reported to law enforcement agencies.
Mitigation measures: Consider age verification measures: explore options for age verification (e.g. self-declaration, third-party verification services) while balancing privacy and accessibility concerns.
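To make the "monitor user interactions" mitigation concrete, here is a minimal sketch of the kind of non-intrusive check it could mean. Everything here (the data shape, the threshold, the function names) is invented for illustration, and anything flagged would go to a human moderator, never to automatic action.

```typescript
// Hypothetical sketch: flag user pairs exchanging an unusually high volume
// of private messages in a day. Data shapes and the threshold are invented.

interface PrivateMessage {
  senderId: number;
  recipientId: number;
  sentAt: Date;
}

const DAILY_PAIR_THRESHOLD = 50; // invented cut-off for "excessive"

function flagExcessiveMessaging(todaysMessages: PrivateMessage[]): Set<string> {
  const counts = new Map<string, number>();
  for (const m of todaysMessages) {
    // Order the pair so A->B and B->A count as one conversation.
    const pair = [m.senderId, m.recipientId].sort((a, b) => a - b).join(":");
    counts.set(pair, (counts.get(pair) ?? 0) + 1);
  }
  const flagged = new Set<string>();
  for (const [pair, count] of counts) {
    if (count > DAILY_PAIR_THRESHOLD) flagged.add(pair);
  }
  return flagged; // pairs for a human moderator to review
}
```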
One very minor possible inconsistency:
You said private messages are not available to children, but in the mitigation measure column you mention "excessive private messaging between adults and minors" as a flag for suspicious behaviour.
 
Not a dumb question. I'm hoping so but I don't know for sure.
I'd second this.

I think right now, until everything has had some time to settle down, doing a reasonable job on the risk assessment and maybe tweaking the odd in-house policy and tool is going to be plenty. You have shown willing. Odds are if you've got a well-run forum then you'll never hear anything more about it. I can't see Ofcom checking every domain/site registered in the UK to see if they do user-to-user content, let alone every site in the world that might have a "significant" UK audience.

The most likely vector for any woe is going to be a disgruntled forum member who gets banned or warned or something and goes off moaning to Ofcom about your site. However, Ofcom will contact you, you'll show that you've done a risk assessment and whatnot and that the "complaint" is just trash, and nothing will happen, or maybe you'll get some extra advice and suggestions (and by this point odds are there will be more tested/ready add-ons and experience around the whole OSA). I think unless your site is a total cesspit, for the most part common sense will probably win in reality.
 
I think it's fair to assume that Ofcom won't come looking for problem sites; there are other organizations already doing that. I also think it highly likely that the overwhelming majority of forums already operate within the guidelines as laid down by the Act, which makes carrying out the required risk assessment a real pain in the neck, especially when the guidance is so badly set out.

Assuming you are not running a site which includes content that requires age verification, the biggest concern has got to be private messaging. Unless you are reading private messages there's no easy way of knowing what those messages may contain. For example, members swapping child pornography as images or links are not likely to be the ones in obvious need of a ban. More often they will be longstanding members who are considered an asset to the forum. That's how these people operate. They need to be liked so that they can continue to do what they do unimpeded.

In short, can any forum owners here say categorically that private messaging is not being used to circumvent the law? I certainly can't because one of my forums was being used by a paedophile to groom young people via personal messaging. I only found out because one of his victims informed me and the fallout from that was legal, protracted and extremely unpleasant for everyone involved.

So what are our options? Option one would be to disable private messaging. Option two, make it clear that private messages are staff-monitored and therefore not private. A third option would be to use AI, which would go a long way towards satisfying the Act as it currently stands.
 
I certainly can't because one of my forums was being used by a paedophile to groom young people via personal messaging.
Are you able to say that AI would have caught that? Especially if they are clever and know how to make the grooming appear innocent?
 
That looks good but how would we find keywords that groomers would use?

That'll be the hard part. Anything related to age, location and specifics would be easy.


Some more here.
 
That'll be the hard part. Anything related to age, location and specifics would be easy.


Some more here.
https://www.vic.gov.au/working-with-children-check is what you need to look at.
It's a little card that tells people you aren't a kiddy fiddler and that you're working with kids.
All adults have them.
I have one because I umpire local footy which involves kids under the age of 18...
 
Are you able to say that AI would have caught that? Especially if they are clever and know how to make the grooming appear innocent?
I can say with some certainty that current AI tools would have been able to identify the problem before it was reported. That said, I would also add it's somewhat ironic that AI has given these people powerful tools to aid them, e.g. text-to-chat tools which enable an adult to more easily pose as a child engaging in sexually explicit chat.
 
One very minor possible inconsistency:
You said private messages are not available to children, but in the mitigation measure column you mention "excessive private messaging between adults and minors" as a flag for suspicious behaviour.
Good point, I'll just need to remove "between adults and minors", so it would just be "excessive private messaging". But a sad thing would be if a genuine romance had been struck up between adults; I wouldn't want to be snooping on that. I suppose one could argue that they should have got a room somewhere else.
 
https://www.vic.gov.au/working-with-children-check is what you need to look at.
It's a little card that tells people you aren't a kiddy fiddler and that you're working with kids.
All adults have them.
I have one because I umpire local footy which involves kids under the age of 18...

You want people to post an ID on a forum to show they're not a fiddler?

 
I find it rather ironic that Meta, in the guise of Facebook Messenger and WhatsApp, are able to continue to allow private messaging on the basis that it is technically not possible to look at the messages (because they are end-to-end encrypted, which you would think would actually increase the risks), but those of us with unencrypted messaging, which the criminals would be less likely to use, are being stopped unless we have age verification measures in place.

It does raise the question: can XenForo (or a plugin) be used to encrypt messaging so we can use the same defence? I suspect we would still run into trouble as we wouldn't be using an app, and the encryption keys would need to be stored somewhere on the user's device, which isn't going to work well for a browser-based product, but I am no expert in this area.
 
can XenForo (or a plugin) be used to encrypt messaging
You can certainly do browser (client) based encryption if you so desired. You can even have client-side certificates. Ages back they were more heavily used (memories...), but I think the overhead of managing them and the general muddle has seen them rather fall by the wayside...
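For the curious, a minimal sketch of what browser-side encryption can look like using the standard Web Crypto API (crypto.subtle, available in modern browsers). The hard parts, persisting the key on the user's device (e.g. IndexedDB) and exchanging keys between users, are deliberately not shown:

```typescript
// Sketch only: symmetric encryption in the browser with Web Crypto.
// Key persistence (e.g. IndexedDB) and key exchange are out of scope here.

async function encryptMessage(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext }; // both are needed to decrypt
}

async function decryptMessage(key: CryptoKey, iv: Uint8Array, ciphertext: ArrayBuffer) {
  const plain = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new TextDecoder().decode(plain);
}

// Usage: the key never leaves the browser, so the server only stores ciphertext.
async function demo() {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false, // non-extractable: page script cannot export the raw key
    ["encrypt", "decrypt"],
  );
  const { iv, ciphertext } = await encryptMessage(key, "hello");
  console.log(await decryptMessage(key, iv, ciphertext)); // "hello"
}
```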

Not sure encrypting client-side technically absolves you of your OSA responsibilities, however; I suspect a lot of money and lawyers absolve Meta and co of theirs ;) Has there been a statement about end-to-end encrypted chat and the OSA? It's certainly interesting and I would guess there is some stuff in there about it.

Even with end-to-end encryption you could, to some extent, do classic SIGINT on it: since in theory with Meta everyone is who they say they are (haha), you know when adults are chatting with children, and you could see the frequency and times of the conversations if not their contents (or maybe things like WhatsApp layer in noise and other obfuscation to prevent such things? I don't know).

Anyhow, with the advent of all the client-side scanning (especially with newer "AI" co-processors), client-side encryption is somewhat damaged, since it turns out those that want to listen just have to plonk themselves into the chain earlier, and since your hardware/OS (more so in the phone environment) is a bit of a closed black box, there isn't a lot that can be done about it. Didn't Apple announce they were doing client-side CSAM scanning a while back?

For the curious, there is a whitepaper from WhatsApp somewhere around that outlines how they do their group chat encryption, which of course is the harder part; 1:1 with public/private keys is easy enough, whereas 1:many requires engaging the brain to develop a solution.
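For flavour, the classic way around the 1:many problem (and, to be clear, a generic hybrid-encryption sketch, not WhatsApp's actual sender-key protocol) is to encrypt the message once with a random symmetric key and then wrap that key separately for each recipient's public key:

```typescript
// Generic hybrid-encryption sketch for 1:many, not WhatsApp's real protocol.
// Recipient public keys are assumed to be RSA-OAEP keys imported with the
// "wrapKey" usage.

async function encryptForGroup(recipientPublicKeys: CryptoKey[], plaintext: string) {
  // One random AES key per message.
  const messageKey = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true, // extractable, so it can be wrapped for each recipient
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    messageKey,
    new TextEncoder().encode(plaintext),
  );
  // Wrap the message key once per recipient: N small wrapped keys instead of
  // N copies of the full ciphertext.
  const wrappedKeys = await Promise.all(
    recipientPublicKeys.map((pub) =>
      crypto.subtle.wrapKey("raw", messageKey, pub, { name: "RSA-OAEP" }),
    ),
  );
  return { iv, ciphertext, wrappedKeys };
}
```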

It's a little card that tells people you aren't a kiddy fiddler and that you're working with kids.
I fear the trouble is the word "convicted" really belongs before "kiddy" in that sentence. Otherwise, at least theoretically, the problem does become easier, although it'd just land you back into the entire verification cycle: you'd have to find companies that were set up to check the credentials of your users. That has a cost. I suspect the only affordable solution there is simply an age check, which I fear many users will find intrusive enough. I can't see people wanting to submit to (and pay for) a full DBS check (the UK equivalent of your card) just to chat about XenForo, for instance. You'd lose fewer users just saying "sorry, no PMs". People would then end up chatting over other systems. XF already ships with fields for various chat names, doesn't it? ICQ, Skype, AIM and all those other popular services, so users will find a way! :)

In short, can any forum owners here say categorically that private messaging is not being used to circumvent the law? I certainly can't because one of my forums was being used by a paedophile to groom young people via personal messaging. I only found out because one of his victims informed me and the fallout from that was legal, protracted and extremely unpleasant for everyone involved.
Yikes, not a nice situation to deal with.

So what are our options? Option one would be to disable private messaging. Option two, make it clear that private messages are staff-monitored and therefore not private. A third option would be to use AI, which would go a long way towards satisfying the Act as it currently stands.
We already have some vague clause that says private messages are not private, basically saying that staff don't have casual access (I forget the phrasing we use) to messages, but they can be looked at. So don't assume they are 100% private.

I'm not 100% sure I want to send private user messages off to a third party for analysis yet (although I suppose they have agreements not to use submitted material for training?), and training a local LLM on "inappropriate material" doesn't sound like a sensible idea! Although I've not looked into running a local LLM, maybe it's moderately feasible, and maybe some of the out-of-the-box ones might work. I suspect at least for now I might just do some basic scanning on keywords and frequency and so forth, have senior admin staff keep an eye on things, and see what tools everyone develops/uses!
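As a sketch of what that basic keyword/frequency scanning might look like (the patterns are illustrative only, and anything flagged should be queued for a human, not acted on automatically):

```typescript
// Illustrative only: a tiny keyword scanner for PM text. Real grooming is
// often subtle, so this catches the easy cases (age, school, moving the
// chat off-forum) at best.

const SUSPICIOUS_PATTERNS: RegExp[] = [
  /\bhow old are you\b/i,
  /\bwhat school\b/i,
  /\b(don'?t|do not) tell (your|ur) (mum|dad|parents)\b/i,
  /\b(snapchat|whatsapp|telegram)\b/i, // attempts to move the chat elsewhere
];

function scanMessage(text: string): string[] {
  return SUSPICIOUS_PATTERNS.filter((p) => p.test(text)).map((p) => p.source);
}

// Example: queue a moderator report if any pattern matches.
const hits = scanMessage("what school do you go to? add me on snapchat");
if (hits.length > 0) {
  console.log("Flag for moderator review:", hits);
}
```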

I certainly think this thread has highlighted that some additional tools are needed around private messaging in XF; now, as to those tools being core or add-ons, and how optional and configurable they are, that's all up for grabs. Off the top of my head (obviously all permission-based) I guess some of those might be:
  • Ability for staff to read entire conversations without joining them
  • Use of moderation queue for PMs
  • Soft delete of PMs (ie still preserving evidence)
  • Mechanisms to tie into content scanning (I guess writing an add-on does this unless XF want to offer some generic framework akin to webhooks)
  • Some statistical analysis that might show up (bad) trends ahead of actually needing to read content (a toy sketch follows this list)
  • Further options on the privacy controls (ie things like who can start conversations with you, etc)?
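Picking up that statistical-analysis bullet, here is a toy sketch of a metadata-only check (counts and timestamps, never content); the baseline length and the 3-sigma threshold are arbitrary choices:

```typescript
// Toy metadata-only anomaly check: compare a user's PM volume today against
// their own recent daily baseline. Thresholds are invented.

function dailyCounts(sentAts: Date[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const d of sentAts) {
    const day = d.toISOString().slice(0, 10); // YYYY-MM-DD bucket
    counts.set(day, (counts.get(day) ?? 0) + 1);
  }
  return counts;
}

function isAnomalous(history: number[], today: number): boolean {
  if (history.length < 7) return false; // not enough baseline yet
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance);
  return today > mean + 3 * Math.max(stdDev, 1); // rough 3-sigma rule
}
```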
 
For the curious, there is a whitepaper from WhatsApp somewhere around that outlines how they do their group chat encryption, which of course is the harder part; 1:1 with public/private keys is easy enough, whereas 1:many requires engaging the brain to develop a solution.

Yes, I had already found and skimmed that as I was wondering how the hell they did one-to-many encryption without storing/forwarding the message.

Not sure encrypting client-side technically absolves you of your OSA responsibilities, however; I suspect a lot of money and lawyers absolve Meta and co of theirs ;) Has there been a statement about end-to-end encrypted chat and the OSA? It's certainly interesting and I would guess there is some stuff in there about it.

I have read so much now that I don't know where I picked this up from, but certainly the Act has a get-out where things don't have to be done if they are technically unfeasible, and Ofcom have accepted that it is technically unfeasible to scan the encrypted messages. Obviously us doing the same is the wrong way to go about making children safer, but it is typical of the way laws like this drive things more underground, and if the megacorps can do it, why can't we?

You are correct that they will still use all of the metadata (that is how Facebook make money, after all), so they will know who messages who, when they messaged them and who is in the groups. It wouldn't be technically difficult for us to do the same; not so great morally, though.
 
This may be a stupid question and may have already been answered but please bear with me as I am all over the place trying to sort things out.

I have a group of people who pay to use our site on a subscription basis. This subscription has been going for a number of years and is always paid for through PayPal.
When you pay through PayPal you have the choice of paying with a debit card, so obviously anyone can do that, but you can also pay with your PayPal account.
When the PayPal account is used, I get a notification that the payment was from a 'verified non-US PayPal account'. This is only available to people who are over 18.

1) Assuming it's from a verified account, would I be correct in saying this is sufficient age verification?
2) Also, some members of the forum have been on since 2007; again, I assume I am OK to say these original members are over 18 years old?
 
1) Assuming it's from a verified account, would I be correct in saying this is sufficient age verification?
Given that all we can do is give our best advice based on somewhat crap info from Ofcom, I will weigh in by saying that if what you need now is a risk assessment then I'd say that's a good bit of assumed mitigation. (Note the vagueness.) What you could do is ask PayPal, who are sometimes surprisingly helpful.

BUT

The main reason for age verification is not hobby-based forums but porn sites, so unless you have a porn site I would not worry at the moment.
2) Also, some members of the forum have been on since 2007; again, I assume I am OK to say these original members are over 18 years old?
That sounds reasonable, unless their account was hijacked by a paedophile. Again, on your risk assessment it will look like a negligible risk of them being a child.
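If it helps, the two assumed signals above could be recorded for the risk assessment as something like the following sketch; payerStatus mirrors the payer_status value PayPal reports alongside a payment ("verified"/"unverified"), while the ForumUser shape and the cut-off date are invented for illustration:

```typescript
// Sketch of recording the two assumed age signals discussed above. The
// ForumUser shape and LEGACY_CUTOFF are hypothetical; payer_status comes
// from PayPal's payment notifications.

interface ForumUser {
  registeredAt: Date;
  payerStatus?: "verified" | "unverified";
}

// Members registered back in 2007 are adults today even if they joined as
// teenagers; verified PayPal accounts are assumed 18+ per the thread above.
const LEGACY_CUTOFF = new Date("2008-01-01");

function assumedAdult(user: ForumUser): boolean {
  return user.payerStatus === "verified" || user.registeredAt < LEGACY_CUTOFF;
}
```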
 