UK Online Safety Regulations and impact on Forums

I think ultimately it's up to you, but both of those sound like a perfectly reasonable basis for classifying someone as over 18 in my humble opinion. You can put those assumptions, essentially what you've written above, into your risk assessment. Remember, unless you are mitigating a risk by limiting features to under/over 18s, it's only sites with actual pornographic content that need to do strict age checks.

On my smaller forum I was certainly going to manually mark all the members I've personally met and who have far too much grey hair to be under 18, so a similarly "non-standard" approach.

EDIT - cross post - what Mr Lucky said!
 
I find it rather ironic that Meta, in the guise of Facebook Messenger and WhatsApp, is able to continue to allow private messaging on the basis that it is technically not possible to look at the messages (because they are end-to-end encrypted, which you would think would actually increase the risks), yet those of us with unencrypted messaging, which criminals would be less likely to use, are being stopped unless we have age verification measures in place.

It does raise the question: can XenForo (or a plugin) be used to encrypt messaging so we can use the same defence? I suspect we would still run into trouble, as we wouldn't be using an app and the encryption keys would need to be stored somewhere on the user's device, which isn't going to work well for a browser-based product, but I am no expert in this area.
It should be possible to turn off conversations and instead offer integration with an encrypted app like Signal, Threema, etc. This would offload conversations to a different platform. But as you say it would increase risk because you would no longer have the option to review such messages in cases of extreme abuse and urgency.
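For what it's worth, here is a minimal sketch of the encryption idea above, using Python's cryptography library purely for illustration (a real browser-based implementation would use the Web Crypto API on the client, and nothing here is an existing XenForo plugin): if the key is generated and kept on the member's device, the forum server only ever stores ciphertext it cannot read.

```python
# Illustration only: the key lives with the user, the "server" stores ciphertext.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Generated and kept on the user's device; never sent to the forum server.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

# What the forum server would store for a private message.
ciphertext = cipher.encrypt(b"Meet you at the show on Saturday?")
print(ciphertext)  # opaque bytes, useless without user_key

# Only a device holding user_key can recover the plaintext.
print(cipher.decrypt(ciphertext).decode())
```

The practical problem mentioned above still stands: a browser has no good long-term home for that key, and losing it means losing the conversation history, which is exactly why the dedicated apps handle this better.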
 
That sounds reasonable, unless their account was hijacked by a pedophile. Again, in your risk assessment it will look like a negligible risk of them being a child.
If an account can get hacked by a pedophile, then doesn't it make age verification a bit of a pointless exercise?
 
Regarding this pointless exercise, I need to get moving on it because the act must be taking effect very soon now, if it hasn't already kicked in. Given that, do I need to send them anything, or is it just a question of doing a written audit for your own site and then storing that? Yes, I am aware of the need to act fast on people reporting complaints, but aside from that I feel a bit overwhelmed by it all.
 
I'm just catching up with this and have read some of the thread, so apologies if I repeat anything. I'm aware of one forum that has turned off direct messaging for all members. Is that really necessary?

I have two main forums. One has about 850 members but is generally quite quiet, and most members know each other. Age limit is 13 - it's a pet forum.

The other has nearly 3,000 members and, while some "know" each other, new members join daily. Members are all "parents", so the age limit could be variable! Currently it's set at the standard age of 13 - there are 15-year-old parents! So it's not limited to adults, although that forum has a rule requiring anonymous usernames and no identifying info posted, due to the legal matters being discussed.

Had a brief look at the guidance and it seems the main thing is to update the terms of service to say what measures will be taken to prevent underage access. Any tips there? Just add a bit saying anyone suspected of being under X will be automatically banned?

Other than that, is it simply that you have to have moderation in place?

What do others think about shutting down direct messaging, except to admin?

Do we need to have a "published" risk assessment anywhere on the forum?

Thanks

Just been looking at the published guidance here. It seems mainly to refer to social media sites and maybe larger forums, depending on the topic?

 
Also, what about video linking? On one forum most members link their own YouTube videos. Could that be seen as members accessing ANYTHING on YouTube?!
 
Having gone through this Ofcom check process, my forums aren't exempt (I don't suppose many will be), so I'm raising the age limit to 18 and turning off direct messaging. One forum isn't really affected if I turn off direct messaging - the other one will be, so I'm thinking about that.

And completing a risk assessment using the downloadable Ofcom form.

Does "file sharing" include posted photos and videos?

 
Had a brief look at the guidance and it seems the main thing is to update the terms of service to say what measures will be taken to prevent underage access.
The act, the guidance for compliance and the tools that are designed to help are, in my opinion, badly created, horribly over-verbose and, by OFCOM's admission, still a work in progress. This seems to be leading to a lot of confusion.

One of the aims of the act is to prevent children from having access to content that is not age appropriate. Therefore it follows if your forum's content including private messages, downloads and direct links is deemed suitable (by OFCOM's standards) for children then you have no need to prevent their access.

If on the other hand your forum contains pornography for example, you would need to take steps to prevent underage access.

Just to throw a little fuel on the fire, what seems to have gone largely unnoticed is that while Child Sexual Abuse Material (CSAM) has been the main focus here and pretty much everywhere else, it's actually just one of 130 'priority offences' whose risks site owners are expected to assess and take steps to mitigate. These priority offences have been placed into the following categories:

Terrorism
Harassment, stalking, threats and abuse offences
Coercive and controlling behaviour
Hate offences
Intimate image abuse
Extreme pornography
Child sexual exploitation and abuse
Sexual exploitation of adults
Unlawful immigration
Human trafficking
Fraud and financial offences
Proceeds of crime
Assisting or encouraging suicide
Drugs and psychoactive substances
Weapons offences (knives, firearms, and other weapons)
Foreign interference
Animal welfare

OFCOM has published a 480-page Register of Risks, should you fancy some light reading...
 
I'm not 100% sure I want to send private user messages off to a third party for analysis yet (although I suppose they have agreements not to use submitted material for training?), and training a local LLM on "inappropriate material" doesn't sound like a sensible idea! Although I've not looked into running a local LLM, maybe it's moderately feasible, and maybe some of the out-of-the-box ones might work. I suspect that, at least for now, I might just do some basic scanning on keywords and frequency and so forth, have senior admin staff keep an eye on things, and see what tools everyone develops/uses!
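If it helps, here is a minimal sketch of that sort of basic keyword/frequency scan, assuming a plain wordlist you maintain yourself - the terms and the threshold below are made-up placeholders, not anything from Ofcom:

```python
# Minimal keyword/frequency scan for private messages (illustrative only).
# FLAGGED_TERMS and THRESHOLD are placeholders you would tune yourself.
import re
from collections import Counter

FLAGGED_TERMS = {"example-term-1", "example-term-2"}  # your own wordlist
THRESHOLD = 2  # queue a sender for review once this many hits accumulate

def scan_message(text: str) -> int:
    """Count how many flagged-term occurrences appear in one message."""
    words = Counter(re.findall(r"[a-z0-9'-]+", text.lower()))
    return sum(words[term] for term in FLAGGED_TERMS)

def should_review(recent_messages: list[str]) -> bool:
    """True if a member's recent messages together hit the threshold."""
    return sum(scan_message(m) for m in recent_messages) >= THRESHOLD
```

Crude, but it keeps everything on your own server and gives the senior admin staff a shortlist to eyeball rather than every conversation.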

I came across this video today. It shows how you can install the Llama 3.2 1-billion-parameter model locally via Ollama on a $10 VPS.

It's a tad slow, but would probably work really well for locally analysing messages to decide whether or not they should be put into moderation.

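As a very rough sketch of what that could look like, here's a call to a locally running Ollama server's /api/generate endpoint asking a small model to classify a single message; the model name, the prompt and the FLAG/OK convention are all just assumptions for illustration, not a tested moderation setup:

```python
# Illustrative only: ask a small local model, served by Ollama, whether a
# private message needs human review. Model name and prompt are placeholders.
# Requires: pip install requests, and a local Ollama server on port 11434.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2:1b"  # whichever small model you have pulled locally

def needs_moderation(message: str) -> bool:
    prompt = (
        "You are helping moderate a forum. Reply with exactly FLAG if the "
        "following private message may be abusive, illegal or unsafe for "
        "children, otherwise reply with exactly OK.\n\n" + message
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return "FLAG" in resp.json()["response"].upper()

if __name__ == "__main__":
    print(needs_moderation("Hey, are you going to the meet-up on Saturday?"))
```

On a cheap VPS this will be slow, as noted, so you would probably run it as a background job that only queues messages for review rather than blocking delivery.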

 
The act, the guidance for compliance and the tools that are designed to help are, in my opinion, badly created, horribly over-verbose and, by OFCOM's admission, still a work in progress. [...] OFCOM has published a 480-page Register of Risks, should you fancy some light reading...
Thanks. It is certainly verbose! And while the content is suitable for children, I assumed it still covered risks such as spammers posting unsuitable images.

I've just activated Cloudflare's CSAM scanning.

I've downloaded the Ofcom pdf for completion (presumably this is the risk assessment) and it's extremely long! And yes it has all the categories. Although after filling in the Ofcom check thing, it seems to think only 17 categories apply to my site.

I think the big thing is what checks you can implement to prove people aren't underage - I saw that mentioned earlier. What is the consensus?

I've just put in the risk assessment that anyone considered to be under 18 will have their account deleted.

I just think if the age limit is 18 and over and there is no direct messaging, then it leaves you less at risk.

Haven't even started on my second forum yet as, at the moment, that one thrives on direct messaging!

On that one, I'm considering setting up a premium private section instead of direct messaging groups.

So a question there. If a forum is private rather than public, does the Online Safety Act still apply?! I guess it does.

And there's this

[attached image: Ofcom fine.webp]
 
I think we could do with an add-on for age verification! Been googling and there are facial age estimation services and QR code ones, but I have no idea how to implement them in a forum. Could be good spam protection as well.
 
I came across this video today. It shows how you can install the Llama 3.2 1-billion-parameter model locally via Ollama on a $10 VPS.

It's a tad slow, but would probably work really well for locally analysing messages to decide whether or not they should be put into moderation.
The problem is that local LLM models don't have anywhere near the context window size needed to process large amounts of data, as they mostly max out at 128K/131K token limits. You'd need cloud-based Google Gemini models with their 1-2 million token context limits :)

Also, the memory requirements of a local LLM will increase your server requirements, based on my testing with my or-cli.py tool, which supports both cloud-based LLM models and self-hosted models via Ollama and vLLM.
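For what it's worth, if the job is batch-scanning a backlog of messages rather than single DMs, one workaround within a ~128K window is simply to chunk the backlog by an estimated token budget; the rough 4-characters-per-token figure below is an assumption, not a measured value from any particular model:

```python
# Rough sketch: split a backlog of messages into batches that should fit a
# ~128K-token context window, using the common ~4 chars/token approximation.
def chunk_messages(messages: list[str], max_tokens: int = 100_000) -> list[list[str]]:
    batches, current, used = [], [], 0
    for msg in messages:
        est_tokens = max(1, len(msg) // 4)  # crude estimate, not a real tokenizer
        if current and used + est_tokens > max_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(msg)
        used += est_tokens
    if current:
        batches.append(current)
    return batches

# Each batch can then be sent to the local model as a separate request.
```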
 
Regarding this pointless exercise, I need to get moving on it because the act must be taking effect very soon now, if it hasn't already kicked in. Given that, do I need to send them anything, or is it just a question of doing a written audit for your own site and then storing that? Yes, I am aware of the need to act fast on people reporting complaints, but aside from that I feel a bit overwhelmed by it all.
Me too. On the Ofcom check page, there's a downloadable PDF, the "Illegal Content Duties Record keeping template". I'm just filling that in and keeping it as a record/risk assessment. Assume that's enough?!

It kicks in on the 16th or 17th, I can't remember which - i.e. by Sunday or Monday - but I think I read you have three months to get everything in place. You'll need to read it to see.

 
The way I'm reading it, from the PDF Ofcom provides for completion, is that even if your site is suitable for kids or covers a general topic, you need to show how it doesn't pose a risk to adults or kids from the various threats - e.g. if a spammer gained access and posted something unsuitable. And you need evidence of what measures are in place.

I'm just putting: good spam filters in place, Cloudflare CSAM scanning activated, good moderation - to prevent risks such as that. For evidence I've just put screenshots of spam settings and Cloudflare settings. However, age restriction evidence is the sticking point at the moment.

I read it that any site a child could access, could pose a risk - hence you need age verification.
 
The problem is that local LLM models don't have anywhere near the context window size needed to process large amounts of data, as they mostly max out at 128K/131K token limits. You'd need cloud-based Google Gemini models with their 1-2 million token context limits :)

Also, the memory requirements of a local LLM will increase your server requirements, based on my testing with my or-cli.py tool, which supports both cloud-based LLM models and self-hosted models via Ollama and vLLM.

That looks like a really cool tool you've been working on.

For the purpose of just analysing single message DMs though, surely you don't need such huge context limits?
 
How about this? Presumably it costs though. Face ID
https://www.yoti.com/business/facial-age-estimation/
FWIW I did contact Yoti, prices start at about £200 a month.
I read it that any site a child could access, could pose a risk - hence you need age verification.
We've mulled over the joys of this catch-22 in a few posts earlier in the thread, but basically, if you can't prove children are not accessing your site then you have to assume they might be. That will impact your risk assessment, i.e. you have to consider children using the site, but it doesn't mean you have to have age verification - although having it might make life easier!
 