UK Online Safety Regulations and their impact on forums

So I've started working on my Child Risk Assessment. I have made it very very long, detailed, explanatory and informative. Including historical background, my own background - and rather gone to town a bit. If anyone ever wants to see it, they will have almost as much reading to do as I have done to do the risk assessment 🤣

Seriously though, it's not too bad once you get started, and it doesn't need to be that long - especially once you know you've mitigated for everything.

Also, and I think this has been mentioned before, there is no actual mention of EXIF data in the risks (I think - I haven't finished yet), even though ideally I'd like to remove EXIF data from photos as a security option.
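For anyone wondering what stripping EXIF actually involves, here's a rough sketch (this isn't a XenForo feature or any particular add-on - the function name and file paths are made up, it just shows the general idea of re-saving the pixel data without the metadata, which drops things like GPS location):

```python
# Sketch only: strip EXIF/metadata (including GPS tags) from an uploaded photo
# by copying the pixel data into a fresh image and saving that instead.
from PIL import Image

def strip_exif(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # new image carries no metadata
        clean.putdata(list(img.getdata()))     # copy pixels only
        clean.save(dst_path)

# e.g. strip_exif("upload.jpg", "upload_clean.jpg")
```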

I'm still not quite clear on user profiles. I have never had profile posting turned on, but I guess age and date of birth show. Unless I can turn that option off somewhere?
 
It could be worse - one of the forums we're trying to make compliant is about mental illness. People often talk about self-harm in a way that is safe and legal, but occasionally people do overstep on the sharing front.

Direct messages are also massively used.

That is a minefield.
I guess that would be at least medium risk then? The AI illegal harms scanning could be useful - but with that topic it could end up flagging almost everything... I guess it just needs experienced moderators.
 
It could be worse - one of the forums we're trying to make compliant is about mental illness. People often talk about self-harm in a way that is safe and legal, but occasionally people do overstep on the sharing front.

Direct messages are also massively used.

That is a minefield.
Rather than direct messages, could you maybe set up sub-forums for specific groups, which would be moderated?
 
So I've started working on my Child Risk Assessment. I have made it very very long, detailed, explanatory and informative. Including historical background, my own background - and rather gone to town a bit. If anyone ever wants to see it, they will have almost as much reading to do as I have done to do the risk assessment 🤣

Seriously though, it's not too bad once you get started, and it doesn't need to be that long - especially once you know you've mitigated for everything.

Also, and I think this has been mentioned before, there is no actual mention of EXIF data in the risks (I think - I haven't finished yet), even though ideally I'd like to remove EXIF data from photos as a security option.

I'm still not quite clear on user profiles. I have never had profile posting turned on, but I guess age and date of birth show. Unless I can turn that option off somewhere?
Users can toggle age and DOB on or off themselves - there's no way of stopping them.
You can default it to off, but they can put it back on, which is frustrating to say the least.
 
I personally think this is being blown way out of proportion.

From someone living in the UK, you simply have to understand that the entire British state is staffed top to bottom with incompetent busybodies. This Act from the so-called "Independent" regulator is simply a gesture because of the Labour government's utter failure to prevent, and in many cases covering up or even participating in, child sexual exploitation.

To fall under the regulations, you need to either specifically target the UK (I doubt any of you do this, encouraging users from many countries to sign up) or have a significant number of UK users.

What is a significant number?

Well, the regulation doesn't specify or even intend to specify.

The Act does not set out how many UK users is considered “significant”. You should be able to explain your judgement, especially if you think you do not have a significant number of UK users.

No matter how big my forum gets, I argue that I simply do not have a significant number. 10k monthly active UK users? Nothing compared to the tens of millions that large platforms like Twitter, TikTok, Discord etc. get. Those companies are the ones Ofcom seek to extort money out of.

With sensible moderation of your forum and use of XF's existing features, I can assure you that you won't be receiving a letter. If you're based in America, you could just forget about it entirely until January 2029.
 
What happens in January 2029? Yes, a significant number is vague, although 10,000 sounds like a lot! It's the "You should be able to explain..." type wording throughout that really grates on me. It's like being at school again.
 
Users can toggle age and DOB on or off themselves - there's no way of stopping them.
You can default it to off, but they can put it back on, which is frustrating to say the least.
That does seem a bit silly. I think there's an option in the ACP to select day and month only rather than the full date of birth. Would that mean they couldn't enter more than a day and month?

In my CRA (which has taken about 36 hours in total, over 2 days :rolleyes:), I have this for user profiles (I think the risk factor is aimed at large social media sites). I think the Genesis add-on I got mitigates for just about everything as "enhanced moderation", but general moderation would cover it as well.

"
2. User Identification factors

2a Services with user profiles

• Risk factor: User profiles

• Key kinds of CHC*: Your service is likely to have an increased risk of harm related to

eating disorder content

If your service allows users to create a user profile that displays identifying information that

can be viewed by others (e.g. images, usernames, age), we expect you to take account of the

risks that can arise from this. For example, our evidence indicates that children can see users

(e.g. ‘influencers’) with a significant number of user connections (see 3a) displayed on their

user profiles as trusted sources of information, creating a heightened risk of harm if eating

disorder content is shared from these accounts. Children can also display pro-eating disorder

information on their user profile, such as in their username, which can contribute to them

receiving recommendations for eating disorder content and facilitate the creation of

networks with other pro-eating disorder users.




2a Comment/Mitigations

The topic of the site is narrow - hamsters and pets. There would not be discussion of eating disorders on the forum. Such content, if it did appear, would automatically be removed due to enhanced moderation. User profiles have limited use and options, and are primarily there so users can manage their privacy settings and notification preferences or look up their own previous posts. There is no option for others to post on profiles. Members can follow other members, but this isn't commonly done on this site. The forum topics and posts are niche. Any inappropriate username would be changed. Avatars default to an initial from the username. User-uploaded avatars are 99.9% a photo of a hamster. Occasionally there has been an avatar that is not a photo of a hamster, but nothing sinister. There has never been a problem with avatars, due to the user demographic. There have been no previous issues of such content or inappropriate usernames. There is enhanced moderation in place (mentioned below), which automatically sends illegal or harmful content - words, images or links - for manual moderation. Low risk."

https://www.ofcom.org.uk/siteassets...ssessment-guidance-risk-profiles.pdf?v=368062
 
If this is helpful to anyone, the way I've done the CRA is write a "Background" piece at the top explaining various things about the site.

Then a heading: Low Risk.

Then, from Table 4.4 on pages 35 and 36, I copied and pasted/adapted the relevant "low likelihood" and "low impact" wording from the table:

Low Likelihood:

There is no evidence that this kind of content has been or is being encountered on this service.

Additionally, there are comprehensive systems and processes in place to limit exposure of children to any potential, but unlikely, future content. (Enhanced content moderation).

Low Impact

There is no evidence that this kind of content is impacting children or users of this site, as there has been no such content and prevention of it is mitigated for.


Then from Figure A1.1 on pages 58 to 59, copied and pasted the numbers and headings that apply to the site with Yes or No in front of them.

Eg
1. Is my service any of the following service types?
a) No
b) No
c) No
d) No
a) Yes (Discussion forum and chat room services)
b) No

(confusingly it has a and b twice under number 1)

2. Does my service have any of the following functionalities related to how users identify
themselves to one another?
a) Yes
b) No


And so on.

Then from table A1.1 on page 60

Copied and pasted the relevant parts that had a "Yes" above.

Eg

1c Discussion forums and chat room services
• Risk factor: Discussion forums and chat room services
• Key kinds of CHC*: Your service is likely to have an increased risk of harm related to eating disorder content as well as suicide and self-harm content

If your service is a discussion forum or chat room, you should consider how your service may be used to discuss and share CHC in a setting that is typically visible to the public. For example, our evidence shows that these services can act as spaces where suicide and self-harm is encouraged and eating disorder content is shared among dedicated communities.

Then typed my response/comments and mitigations underneath that, as per the example in the previous post for 2a.

After completing all of that, I wrote a piece about all four "General Risk Factors" on pages 65 to 66 by copying and pasting the Risk Factor heading only and writing a short piece about each. "User base age" was quite a challenge, but I mentioned that under-5s were unlikely to register, as registration requires an email address.

In all my responses throughout I just referred to "mitigations in place (see below)"

Then at the end I listed and wrote a piece about all the mitigations in place: reporting system, Crowd Moderation, word censors, reports generating an email to a moderator, spam prevention measures, how content moderation works (the AI content moderation, due to lack of moderators), and Cloudflare CSAM protection.

I do think the AI content moderation made it easier to show mitigation, since anything harmful would automatically be removed or held if it appeared.
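The "word censor / held for manual moderation" part of that is conceptually just this (a made-up sketch, not the actual add-on or XenForo's own code - the term list and function names are placeholders):

```python
# Sketch only: hold a post for manual review if it matches a flag list,
# instead of publishing it straight away.
import re

FLAGGED_TERMS = ["placeholder-term-1", "placeholder-term-2"]  # stand-in word list

def needs_manual_review(post_text: str) -> bool:
    lowered = post_text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in FLAGGED_TERMS)

def handle_new_post(post_text: str) -> str:
    if needs_manual_review(post_text):
        return "held for moderator approval"  # e.g. trigger the report email here
    return "published"
```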

I read the rest, but I'm assuming the above is "suitable and sufficient" for a CRA.
 
This Act from the so-called "Independent" regulator is simply a gesture because of the Labour government's utter failure
But didn’t the Act originate under a Conservative government?
What is a significant number?

Yes, a significant number is vague
Indeed, given their own definition of a small organisation. I can’t remember or find the figures now, but wasn’t a small platform under 7m users or something?
 
I'm also considering setting the age limit to 16 rather than 18. I can't see that it makes a lot of difference if there's no age verification software, and it would mean the site wouldn't exclude older teenagers who have their own pets. Not that we had any that posted. I suppose technically, though, that could be seen as encouraging children to join - 16 and 17 year olds are still children.

Still thinking about this. Probably best to set it to 18.
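Whichever limit it ends up being, the registration-age check itself is only this arithmetic (a sketch for illustration - XenForo handles this itself, and MINIMUM_AGE here is just whichever value gets chosen):

```python
# Sketch only: work out whether a supplied date of birth meets the minimum age.
from datetime import date
from typing import Optional

MINIMUM_AGE = 18  # or 16, depending on the policy decision above

def old_enough(dob: date, today: Optional[date] = None) -> bool:
    today = today or date.today()
    # subtract one if this year's birthday hasn't happened yet
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= MINIMUM_AGE
```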
 
You can choose either moderation or soft-delete. I'd love a more advanced system, but then again there are a few changes I'd make to reports. Here are the options for Crowd Moderation:

[Attachment: screenshot of the Crowd Moderation options]
So in my case the permissions shown are for users that have been on the site a good long time (probably 15+ years, I forget), so if they shout I pay attention, and I even allow them to "shout" twice (i.e. if they report a post twice), which would actually be enough to have something pulled (2 x 3 points > 4-point threshold). Younger users are a little less heavily weighted.
Just going back to this. If I set all users to 4 moderation points but a report limit of 2, would that mean any reported post automatically goes for moderation, but users can't report more than two posts in any given time period (would it be 24 hours)? i.e. would it stop a rogue user removing masses of posts by sending them all to moderation and messing up the forum?

That's the main thing I'd want to achieve (rough sketch of the arithmetic after the list below):

1) Anything reported goes straight for moderation
2) Limit how many posts a user can report within a given time period.
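Under those assumptions (everyone's report worth 4 points, a 4-point threshold, and a daily limit of 2 counted reports per user), the behaviour would be roughly this - a made-up sketch of the arithmetic, not XenForo's actual code:

```python
# Sketch only: accumulate report points per piece of content and hold it once
# the threshold is reached, while capping how many of a user's reports count.
from collections import defaultdict

THRESHOLD = 4           # points needed before content is auto-held
USER_POINTS = 4         # points each user's report is worth in this scenario
DAILY_REPORT_LIMIT = 2  # reports per user counted towards auto-moderation per day

reports_today = defaultdict(int)   # user -> reports counted today
content_points = defaultdict(int)  # content id -> accumulated points

def report(user: str, content_id: int) -> str:
    if reports_today[user] >= DAILY_REPORT_LIMIT:
        return "report logged, but not counted (user over daily limit)"
    reports_today[user] += 1
    content_points[content_id] += USER_POINTS
    if content_points[content_id] >= THRESHOLD:
        return "content sent to the moderation queue"
    return "report counted"

# One report is enough to reach the threshold, but a rogue user can only
# trigger this on two pieces of content per day under these assumptions.
print(report("rogue", 1))  # content sent to the moderation queue
print(report("rogue", 2))  # content sent to the moderation queue
print(report("rogue", 3))  # report logged, but not counted (user over daily limit)
```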
 
I've had to resort to using Google's AI Overview to try and ensure compliance with the Act, so take this for what it's worth. With regard to the 70,000 user threshold (note my emboldened text):

The 70,000 user threshold, previously used in the UK Online Safety Act for determining which services were in scope of the regulation, has been removed. This means that all file-storage and file-sharing services, regardless of their size, are now expected to use automated tools like hash-matching and URL detection to find child sex abuse material. Additionally, all user-to-user services must now employ a content moderation function to review and assess suspected illegal content.

I would very much like to be shown this is false information.
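For what it's worth, the "hash-matching" in that quote boils down to comparing a fingerprint of each uploaded file against a list of known-bad fingerprints supplied by a trusted body. The real tools (e.g. what sits behind Cloudflare's CSAM scanning) use perceptual hashes rather than plain file hashes, so this is only a rough illustration of the matching step, with a placeholder hash list:

```python
# Sketch only: flag an upload if its hash appears in a known-bad hash list.
import hashlib

KNOWN_BAD_HASHES = set()  # placeholder: would be populated from a trusted feed

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    return file_sha256(path) in KNOWN_BAD_HASHES
```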
 
Just to confuse the issue, ChatGPT gives various numbers! Clear as mud.

Under the UK's Online Safety Act, the definition of a "small service" can vary depending on the context and specific guidance provided by Ofcom.

Ofcom generally classifies services with fewer than 49 employees as "small" services. However, when considering the number of UK users, a "small service" is typically one with fewer than 7 million average monthly UK users. Some measures may apply to services with more than 700,000 monthly active UK users, such as using hash-matching and URL detection to detect [CSAM]
 
OK I was wrong. But I think I could still claim < 3000 active users as insignificant.
Apologies - it seems 7 million is right (possibly). See above. But I have definitely seen 70,000 somewhere, maybe in the digital toolkit.

Haven't they heard of extra small, small, medium, large and extra large? 🤣
 
This is how the digital toolkit describes it:

"File Storage and File Sharing Service:

Services whose primary functionalities involve enabling users to store digital content and share access to that content through links."

The primary function of a forum is not to enable users to store digital content and share access to it via links. I see that as describing places like Google Drive.
 