UK Online Safety Regulations and impact on Forums

Did you try the digital toolkit?
I must admit I'm getting confused now.

I just did the risk assessment questionnaire, which told me I needed to download the template file covering how I would be mitigating things.

I remember someone posted an example of this in this thread.

But I now see a link from Eva to a massive GitHub document that it seems should be posted in the site's terms and conditions?
 
The online questionnaire is the digital toolkit with the .odt download you fill in, so it sounds like you've done everything.
 
It's this one. You have to go through the four sections. After the tick-box bit that asks about risk factors, information comes up about your risks and what's needed under low, medium or high. After that bit I had to go back and change my risk tick boxes because I'd just guessed at them. Then back again to the third section - the risk info didn't change. The last section tells you what measures you need. Which wasn't a lot, but the earlier risk info suggested things you might need to do if you want to be low risk.

The .odt download file is your record of your risk assessment. It took me ages to fill in. Then I went back to do it again as I didn't understand the last bit: in section 4 of the toolkit - the recommendations - you need to copy and paste those, plus the smaller greyed-out bits under them (showing which part of the Act they apply to), into the last part of the .odt form.

I found it like a bad exam paper or school exercise.

 
Not sure. The only things that came up to do after the toolkit were a named person, TOS and reporting organisations. Along the way it gave an idea of what you needed to do if you wanted to be classed as low risk. I thought it was just Category 1 and 2 services, or sites over 70,000 members, that had to report back to Ofcom.
I know of a few that go that far.
Good that you're keeping an eye on this too.
If this was me having to do this, I'd create another group and make the rules strict in that group.
To cover your bum as a forum owner.
If you have children posting, create an under-18s group and make the rules comply with the regulations, with the swear filter in place.
 
There's a bit here about if it's medium risk

"If your service is medium or high risk for one kind of illegal harm, it is a ‘single-risk’ service, and more measures may apply. If your service is medium or high risk for two or more kinds of illegal harm, it is a ‘multi-risk’ service, and further measures may apply. The Codes of Practice indicate which recommended measures apply to each type of service."


So I don't want to be a medium risk in one category! Basically, if a site is accessible to children and doesn't have every single hyperlink checked - either manually or by some automated means - then it comes under medium risk for the hyperlinks section in the basic risk assessment.

The only way to make it low risk is to have some method of every single hyperlink being checked. Either by pre-moderation or some kind of automated scanning.

Under the child risk assessment section it's similar. The only way to be low risk is everything is pre-moderated - that's what it seems to say. Presumably it would accept automated constant moderation.

Hence it's either age verification (and having everything low risk), or all links pre-moderated, or some automated scanning of links.
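For what it's worth, the "automated scanning" option could in principle be as simple as checking every domain in a post against a blocklist before letting the post through. A minimal Python sketch of the idea - the domain set and function names here are hypothetical, and a real setup would use a maintained reputation service rather than a hand-kept list:

```python
import re

# Hypothetical blocklist; a real forum would query a maintained
# reputation service instead of a hard-coded set.
BLOCKED_DOMAINS = {"malware.example", "phish.example"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def extract_domains(text):
    """Pull the host part out of every http(s) link in a post."""
    return [m.group(1).lower() for m in URL_RE.finditer(text)]

def links_allowed(text):
    """True only if no link in the post hits the blocklist.
    A post with no links at all passes trivially."""
    return all(d not in BLOCKED_DOMAINS for d in extract_domains(text))
```

Whether a check like this satisfies Ofcom's idea of "checking every hyperlink" is a judgement call; it only shows that the mechanics are not exotic.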

That is my understanding so far. And reading all this stuff is spoiling my life!
 
Asked ChatGPT what the additional measures were and it said:

The specific extra measures for single-risk services are detailed in Ofcom's Codes of Practice. These measures are tailored to the identified risks and may include:


  • Enhanced Content Moderation: Implementing more robust systems to detect and remove harmful content.
  • User Empowerment Tools: Providing users with better controls to manage their experience, such as blocking or reporting harmful content.
  • Transparency Reports: Regularly publishing reports on content moderation activities and effectiveness.
  • Algorithmic Safeguards: Adjusting recommendation systems to minimize the spread of harmful content.
  • Age Verification: Implementing measures to ensure that users are appropriately age-gated, especially concerning harmful content.

The irony of this is - if it's medium/single risk you need enhanced content moderation. But if you have enhanced content moderation you become low risk. In other words, they want you to have enhanced content moderation (if you don't have age verification). Unless you can show the site definitely isn't one that children would access.

I certainly wouldn't want to regularly publish transparency reports. So I'd like to do whatever is needed to be low risk! They don't care how small the site is - as long as it's under 70,000 members, size doesn't seem to come into it. They think being small doesn't make a site safer.
 
Terms of service: I'm having a slight problem with the fact that the Complaints procedure contradicts the standard terms of service. Standard TOS says something like the right to remove or ban etc without explanation.

But under OSA we have to have a complaints procedure about content removal.
 
Our main forum has been running since 2004, with 165,930 members - some of those will be bots/spammers/dead accounts of course - and it has been in decline for a long time now, but this law was the final nail. We set Cloudflare to block UK access and are transferring ownership of it to a US member. I have given up. I had plans for other sites that were going to have forums; those plans are abandoned too.
I tried reading all the documents on the Ofcom site, the tool etc, but I just feel it's too much. It's an anime forum; we don't stand a chance.

Best of luck to those of you who are continuing.
 
Sorry to hear that. "Too much" describes it well. The Act does make it onerous for anyone who wants to run a forum IMO.
 
Ok looking at the Child Risk Assessment stuff again - I think it's going to be really hard to comply. Have extracted a few things from the CRA Guidance.

You have to assess the risk factors in Appendix A1 near the bottom of the linked page.

Services with direct messaging, or that allow posting of photos or videos, re-posting or sharing content, searching for user generated content, tagging, services with features that increase engagement (eg "likes" and comments, alerts and notifications), come under risk factors.

You also need to assess risk factors for the different age groups

0-5, 6-9, 10-12, 13-15, 16-17

Assessing how optimising revenue for the site can increase risks to children (ads, recommender systems etc)

"
Your commercial profile may increase the risk that children encounter different kinds of CHC.
For example, we would expect you to consider:
- How low capacity or early-stage businesses may pose heightened risk if they have
more limited technical skills and financial resources to introduce effective risk
management. For instance, they may have insufficient resources to adopt
technically advanced automated content moderation processes, or to hire a large
number of paid-for moderators."

(So that's small sites too risky then is it?!)

You then have to make a note of all your risk factors and use these as part of your risk assessment in Step 2.

Under the risk levels table, you then have to assess your risk factors, as well as the impact of the specific harms on a child and whether you assess that impact as High, Moderate or Low. So the impact of a child seeing any of the harms forms part of determining whether it's low, medium or high risk. E.g. can you say the harm would be low impact?

So below are the tables for Medium and Low Risk. Noting that you can't just "think" you're low risk - you have to assess your level of risk by looking through the "risk factors" mentioned above.

Also noting the little point 28 under medium risk

"
28 Some risk factors, while distinct, may have a similar effect on your service (such as a situation where your
service allows users to post videos, and is also a video-sharing service). In these situations, you may choose to
consider the risk factors together. Separately, some distinct risk factors may combine or intersect and increase
risk, as noted in the Risk Profiles (such as livestreaming intersecting with the group messaging and
commenting functionalities of a service). "

But just looking at Low Risk for now .....

To assess as low risk, you have to establish that no or few of the "risk factors" in Appendix 1 apply (eg personal profiles, direct messaging, posting photos and videos, advertising, user encouragement ......)

Or that there are comprehensive systems in place to prevent these risks (eg pre-moderation).

Note: I've established that even AI scanning of links doesn't count as pre-moderation - it is post-moderation. Pre-moderation is either a) all links go for manual moderation or b) scanning of links BEFORE they are actually posted.
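To illustrate that pre/post distinction: the whole question is whether the check runs before anything becomes publicly visible. A rough Python sketch - `publish` and `hold_for_review` are hypothetical callbacks standing in for whatever the forum software actually provides:

```python
import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def submit_post(text, publish, hold_for_review):
    """Pre-moderation gate: any post containing a link is diverted to a
    review queue BEFORE it is published. Scanning the same link after
    publish() had already run would only count as post-moderation."""
    if URL_RE.search(text):
        hold_for_review(text)  # nothing is publicly visible yet
        return "held"
    publish(text)
    return "published"
```

The burden this implies is exactly the complaint above: every link-bearing post sits in a queue until a human (or an acceptable scanner) clears it.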


Which to me suggests most people would be MEDIUM risk. As I wouldn't want to manually pre-moderate all links. Half the posts would end up being manually moderated then!

What it doesn't tell us yet is - if you are medium risk for a CRA - what additional measures you would also need to take.

(Attached: screenshots of the Moderate Risk, Moderate Impact and Low Risk tables.)



 
On the other hand it's all still a bit vague in places too. So if you could say that there are comprehensive processes in place (even if not quite at pre-moderated level), and there is no evidence of that kind of content ever being found on your service, and no evidence that content of this type is impacting children or users - then maybe you COULD say you were low risk.

But the criteria for medium risk and low risk seem to contradict each other.

But I think I would go with low risk based on - never had an issue before. Comprehensive processes in place. On the other hand, it's a site that could attract children which makes it a higher risk :(

They seem very focused on preventing users posting things, when in my case (and others I imagine) the users aren't an issue - the only issue is a rogue spammer.
 
I wouldn't mind so much if there was a quite long and intensive online form that you could fill in and save as you went along. HMRC can do that for self assessment and it works very well. But just saying "do a risk assessment" when you are not a professional risk assessor is really unhelpful for small forums with no admin (office) staff beyond a team of volunteer moderators.
 
Yep. And as @sport_billy said - they haven't even released an online toolkit thing yet.

I have now worked out I could class the site as low risk for a Child Risk Assessment - looking at the criteria again (no previous incidence of such material on the site), but not for the basic risk assessment :rolleyes: On the other hand there are individual specifics I could say make it low risk - like it's a niche hobby forum with a defined user group that isn't intended for children - but then I don't have age verification so there could be child users. But they do have some contradictory stuff in all the blurb. You try to follow everything and they keep leading you down blind alleys. But within that yes, surely there is scope for making a judgement call of your own depending on your site usage, history and topic?

When I first started looking into this I thought - the only risk is from spammers - not weirdos.

Incidentally a question. When someone reports a post - can you see which member has reported it? I think I've only ever had a post reported once before!

What I really didn't like in that Child Risk assessment stuff was the bit that said small sites could be risky due to lack of technical expertise and no paid moderators. It may be the case that a forum owner lacks technical expertise and doesn't have a team of in-house developers, but that doesn't mean we're stupid! Or don't run a site well. The attitude really annoys me.

Are we supposed to pass a personality test and psychological assessment too? To prove we're worthy of running a site?!!
 
When I first started looking into this I thought - the only risk is from spammers - not weirdos.

Spammers could be an issue. There was a spate of spammers a few years back advertising online brothels, so a lot of NSFW images. On my forum they didn't get past the approval queue, so if you have measures whereby no spammer ever gets as far as making a public post, that's a big plus. I have caught so many purely from the registration form questions: What instrument do you play? and What are your reasons for joining our forum? All legit users give a good reason; spammers generally put some AI garbage, so we know. Even if the post is a generic "Hi everyone, glad to be here", those users will have posts moderated until they post something to do with the forum topic that shows they are genuine.
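That kind of registration screening can even be partly automated before a human looks at the answer. A crude Python sketch of the idea - the keyword set is invented for a music-forum example, and this is only a first-pass filter, not a substitute for the human judgement described above:

```python
# Hypothetical on-topic vocabulary for an instrument forum.
TOPIC_KEYWORDS = {"guitar", "piano", "violin", "drums", "bass", "flute"}

def looks_genuine(answer):
    """First-pass check on a registration answer: does it mention any
    on-topic keyword? Anything that fails still goes to a human queue
    rather than being rejected outright."""
    words = {w.strip(".,!?\"'").lower() for w in answer.split()}
    return bool(words & TOPIC_KEYWORDS)
```

A generic "glad to be here" answer contains no topical vocabulary, so it would land in the moderation queue exactly as described.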
 
I wasn't worried about spammers - had that sorted :-) Before I had better spam protection, rarely would one get past the approval queue. Maybe a couple of times a year. And then it was just some kind of advertising link, or once, a long religious piece. Never had any nasty stuff.
 
I think Ofcom just want all sites that aren't intended for kids - to have age verification, and so are making the CRA a deliberately tortuous process. Noting they mention age verification as an "extra measure" as well. And expecting us all to pay commercial companies for it.
 