UK Online Safety Regulations and impact on Forums

Jumping on this. There has been no update from Ofcom since December.

I am hoping the scope is changed to not apply to communities under x k users.
The legislation has been published; only the guidance is missing - but don’t expect a reduction in scope. Ofcom’s own page says the legislation applies to individuals running sites, and that size doesn’t matter: you’re still affected. (It’s only a question of how affected you are.)
 
I'm not sure if this has already been mentioned. Although Ofcom has published an interactive tool which tells you if your site falls within the Act, it hasn't yet published a tool that sets out what steps you need to take to fulfil your obligations.

While I won't be shutting down any of my sites, I will in all likelihood temporarily remove the ability to send a PM but take no further action until the regulator has got its act (no pun intended) together.
 
the regulator has got its act (no pun intended) together.
So we’ll be talking about it in October or thereabouts for a first run.

(For comparison, the law I mentioned I’ve been dealing with came into force in November 2023, guidance published February 2024, cue panic meetings in March with “we have this law and we have no idea how to make it work” and we’ve still been having conversations on actually implementing it as recently as December. And, as I said, this is a law their own department isn’t compliant with and has deemed it impossible to be so. They are shocked we figured out how to do it.)
 
Yes but which year? :giggle:
A first run this year, but there will have been enough pushback from operators both small and large that they’ll be looking to publish revised guidance in summer 2026. Or they’ll just quietly remove the enforcement parts and make it “best practice”.
 
I am the webmaster for a large bike club based in the UK. We have a very popular forum, but it may have to be shut down due to this new legislation: an 85-page risk assessment document, and someone has to put their name down as the responsible forum owner. I doubt anyone will be willing to do this, and I don't have the time to fill in an 85-page document.
Why would a risk assessment need to be 85 pages?
 
"If we decide to open an investigation and find that your service has failed to comply with its duties, we may impose a penalty of up to 10% of qualifying worldwide revenue or £18 million (whichever is the greater) and require remedial action to be taken.17"

I'd better start saving £18 million then - good luck with that
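For what it's worth, the cap in that quote is easy to sanity-check - a quick sketch (numbers illustrative):

```python
# The OSA maximum penalty quoted above: the greater of 10% of
# qualifying worldwide revenue or £18 million.
def max_osa_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Statutory ceiling in GBP - not what Ofcom would actually levy."""
    return max(0.10 * qualifying_worldwide_revenue_gbp, 18_000_000)

print(max_osa_penalty(0))            # 18000000.0 - even with zero revenue
print(max_osa_penalty(500_000_000))  # 50000000.0 - 10% kicks in above £180m
```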

One issue I have picked up on is the use of chat rooms. PMs/conversations can be monitored to some degree with add-ons that highlight certain words and flag them up, and the profanity filter can likewise censor certain words/phrases.
However, chat rooms, especially those that enable private rooms, will become a big no-no as the risk of abuse is too high.

@Siropu offers a comprehensive chat room add-on, but even that does not facilitate flags and censors, and I doubt anyone would want to moderate it 24/7.
 
@Siropu offers a comprehensive chat room add-on, but even that does not facilitate flags and censors, and I doubt anyone would want to moderate it 24/7.
It has a log that can be read and filtered. Generally there is no need to monitor it 24/7.
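For anyone wondering what reading and filtering that log looks like in practice, here's a minimal sketch (the word list and log file name are made up, not the add-on's actual config):

```python
import re

# Hypothetical word list an admin might maintain.
FLAGGED_TERMS = {"grooming", "self harm", "meet up alone"}

pattern = re.compile(
    "|".join(re.escape(t) for t in FLAGGED_TERMS), re.IGNORECASE
)

def scan_chat_log(lines):
    """Yield (line_number, line) for chat messages containing a flagged term."""
    for n, line in enumerate(lines, start=1):
        if pattern.search(line):
            yield n, line

with open("chat_log.txt", encoding="utf-8") as fh:  # made-up export file
    for n, line in scan_chat_log(fh):
        print(f"line {n}: {line.strip()}")
```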

I see that Discourse is adding LLM analysis, i.e. an AI agent that knows your rules, analyzes content and reports if there is a rule breach.

Automation + AI
With the Automation plugin, automatically classify your posts and topics via AI triage. Set automation rules and AI triage will analyze posts, performing actions such as hiding, tagging, flagging NSFW, spam or toxic content, and much more.

They are also adding some report reasons related to the OSA, i.e. users can mark content as illegal.

I can see how that would be very useful.
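Purely as a sketch of the triage idea (the `call_llm` helper and the moderation hooks are hypothetical stand-ins, not Discourse's actual plugin API):

```python
# Send a post plus the forum rules to an LLM and act on its one-word verdict.

FORUM_RULES = """\
1. No harassment or hate speech.
2. No NSFW material outside marked areas.
3. No spam or unsolicited advertising.
"""

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hosted API or local model)."""
    raise NotImplementedError("wire up your provider here")

def triage_post(post_text: str) -> str:
    prompt = (
        f"Forum rules:\n{FORUM_RULES}\n"
        f"Post:\n{post_text}\n\n"
        "Answer with one word: OK, FLAG, or HIDE."
    )
    verdict = call_llm(prompt).strip().upper()
    # Fail safe: anything unexpected goes to human review rather than through.
    return verdict if verdict in {"OK", "FLAG", "HIDE"} else "FLAG"
```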
 
I still cannot find anywhere that says how long your risk assessment has to be. It also makes little sense to talk about pages; the count would depend on font size, line height and line spacing.

But I can see that if it becomes an online form like filling out your tax return, then anything that takes more than a certain amount of time could become difficult for a lot of people.
 
I still cannot find anywhere that says how long your risk assessment has to be. It also makes little sense to talk about pages; the count would depend on font size, line height and line spacing.

But I can see that if it becomes an online form like filling out your tax return, then anything that takes more than a certain amount of time could become difficult for a lot of people.

I think the templates and guidelines that @eva2000 has published further up this thread with a few tweaks should be an excellent starting point for most of us.
 
I asked ChatGPT what functionality XF has that complies with the OSA, what functionality is missing, and what should be expanded. I think the answers are quite interesting. They give an idea of what functionality we need to look into, whether through suggestions or third-party providers like cloud services or add-on developers. As previously mentioned, the OSA remains a moving target with more to come. I do think that most of the missing functionality would be good to have as quality-of-life improvements and would make communities much better for users and moderators.

Missing Functionality:
  1. CSAM Detection and Reporting: Lack of automatic detection and reporting of illegal content such as CSAM. (Available through Cloudflare)
  2. Granular Reporting Categories: Need for more specific reporting categories to handle different violations. (Suggested here)
  3. Advanced Content Filtering: Missing keyword or topic-based filtering at a more granular level. (Suggested here, here and here)
Needed Improvements/Expansions:
  1. Enhanced Reporting Options: Expand reporting categories and provide real-time feedback to users. (Suggested here)
  2. Automated Detection of Harmful Content: Implement AI-based detection systems for harmful content, both in posts and private messages.
  3. Improved Transparency: Provide more visibility into moderation actions and report statuses to users.

Missing Functionality in XenForo Software to Comply with the Online Safety Act


Despite XenForo's existing compliance with many provisions of the Online Safety Act 2023, certain features are missing or require enhancement for full compliance, particularly with respect to automated content moderation, child sexual abuse material (CSAM) detection, and granular reporting options.

1. Child Sexual Abuse Material (CSAM) Detection and Reporting
  • Missing Functionality: XenForo does not include automated tools to detect illegal content such as CSAM.
    • Non-Compliance: Under Section 18 of the Online Safety Act, platforms must take steps to detect and report illegal content (including CSAM).
Details:
  • XenForo needs to integrate an automatic content detection system that flags CSAM or other illegal materials.
  • While moderators can manually identify and take action against harmful content, automated systems would better ensure that illegal content is detected promptly.
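Worth stressing that real CSAM detection means matching uploads against vetted hash lists (PhotoDNA via NCMEC, Cloudflare's CSAM Scanning Tool, the IWF list) - you cannot and must not build your own database. Purely to illustrate the hash-matching mechanism, a sketch using perceptual hashes with an empty placeholder list (`pip install pillow imagehash`):

```python
from PIL import Image
import imagehash

# Hypothetical set of known-bad perceptual hashes supplied by a trusted body.
KNOWN_BAD_HASHES: set[imagehash.ImageHash] = set()

MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates

def upload_is_blocked(path: str) -> bool:
    """Return True if an uploaded image matches the blocklist."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)
```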

2. Granular Reporting Options for Specific Violations
  • Missing Functionality: XenForo's reporting system could be improved by offering more granular options for users to specify the type of violation (e.g., hate speech, harassment, bullying, illegal content, etc.).
    • Non-Compliance: Section 15 of the Online Safety Act requires platforms to empower users with specific and targeted tools to report harmful content.
Details:
  • XenForo should consider implementing more detailed reporting categories (e.g., harassment, hate speech, misinformation) to allow users to specify the nature of their report more clearly.
  • A more visible and accessible reporting button for all types of content (posts, threads, private messages) should be added to ensure ease of reporting for users.
  • Additionally, a real-time escalation system to highlight urgent cases (e.g., threats of violence or CSAM) should be introduced to speed up the response process.
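As a rough sketch of what category-based routing with escalation could look like (all names hypothetical - XenForo's core report form has no such categories):

```python
from dataclasses import dataclass
from enum import Enum

class ReportCategory(Enum):
    SPAM = "spam"
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    ILLEGAL_CONTENT = "illegal_content"
    CSAM = "csam"
    OTHER = "other"

# Categories that should jump the normal queue.
URGENT = {ReportCategory.CSAM, ReportCategory.ILLEGAL_CONTENT}

@dataclass
class Report:
    content_id: int
    reporter_id: int
    category: ReportCategory
    comment: str = ""

def route_report(report: Report) -> str:
    """Send urgent categories to an escalation queue, the rest to the normal one."""
    return "escalation_queue" if report.category in URGENT else "report_queue"
```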

3. More Advanced Content Filtering and User Control
  • Missing Functionality: XenForo’s current filtering options focus on filtering content by forums and categories. However, keyword filtering and more granular content preferences are currently not available at the level needed for compliance with Section 19 of the Act.
    • Non-Compliance: Section 19 requires platforms to allow users to control and filter content more effectively.
Details:
  • XenForo could benefit from a keyword filtering feature that allows users to filter posts containing specific words or phrases, similar to a blacklist function for harmful topics (e.g., hate speech, abusive language, or other sensitive subjects).
  • Additional settings should be available to block specific content types (e.g., images, videos, threads) based on user preferences.
  • Offering content categorization tools that help users avoid specific topics entirely (such as violent content, NSFW content, etc.) would further enhance user control.
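The filtering logic itself is simple enough; here's a sketch of a per-user muted-phrase list (hypothetical - nothing stores or applies this in XenForo today):

```python
import re

def build_filter(muted_phrases: list[str]):
    """Return a predicate that is True when text contains a muted phrase."""
    if not muted_phrases:
        return lambda text: False
    pattern = re.compile(
        "|".join(re.escape(p) for p in muted_phrases), re.IGNORECASE
    )
    return lambda text: bool(pattern.search(text))

user_mutes = build_filter(["politics", "diet talk"])
posts = ["Club ride on Sunday", "More politics in the news again"]
visible = [p for p in posts if not user_mutes(p)]
print(visible)  # ['Club ride on Sunday']
```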

Existing Functionality that Should Be Expanded or Improved to Comply with the Online Safety Act


Several features in XenForo can be expanded or improved to provide more control over content, improve reporting processes, and enhance user safety in line with the Online Safety Act 2023.

1. Enhanced Detection of Harmful Content
  • Improvement: XenForo should introduce more automated moderation tools to help detect abusive behavior, spam, and harmful content such as hate speech or harassment, both in public posts and private messages.
Details:
  • Automated tools, such as AI-based moderation systems, could scan text for abusive language or patterns of harassment.
  • Sentiment analysis tools could be integrated to flag abusive or harmful content that might not be immediately obvious to human moderators.
  • Regularly updating the moderation systems to address emerging issues in online safety, including deep fake detection or new forms of abuse, could help maintain compliance.
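As a sketch of what automated screening could look like with an off-the-shelf classifier (model choice, label scheme and threshold are illustrative, not a recommendation; `pip install transformers` plus a backend such as PyTorch):

```python
from transformers import pipeline

# Label names follow the model's own scheme.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def looks_toxic(text: str, threshold: float = 0.8) -> bool:
    """Flag text whose top label is 'toxic' above a confidence threshold."""
    result = classifier(text[:512])[0]  # crude character truncation
    return result["label"].lower() == "toxic" and result["score"] >= threshold

if looks_toxic("example post text"):
    print("queue for human review")  # flag, never auto-punish on model output alone
```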

2. Improved Transparency on Moderation Actions
  • Improvement: XenForo could enhance transparency around how reported content is handled and provide users with more detailed feedback on actions taken against reported content.
Details:
  • XenForo could also consider a public moderation log that reports types of moderation actions taken across the platform, such as banning users or removing harmful content.

3. Expand Reporting Options for Specific Violations
  • Improvement: XenForo’s reporting system should be expanded with more specific options for reporting harmful content, including categories for abuse, harassment, illegal content, and others.
Details:
  • Adding more reporting categories will ensure users can specify the exact nature of the violation, making it easier for moderators to address issues.
  • Providing real-time feedback to users after they submit reports could improve transparency and user trust in the system.

4. Transparency and Reporting on Action Taken
  • Expansion/Improvement: XenForo could improve transparency regarding how reports are handled, and what actions are taken after content is reported.
    • Compliance: Under Section 18 of the Act, platforms are required to enforce content moderation policies and provide transparency regarding the actions taken in response to user reports.
Details:
  • XenForo could create a dashboard with a notification system that informs users about the status of their reports (e.g., whether content was removed, whether the user was warned, etc.).
  • Additionally, creating a public transparency report (or an accessible summary of actions taken) would provide insight into how the platform handles rule violations.
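A sketch of that feedback loop - resolve a report and tell the reporter what happened (all names hypothetical; XenForo's report centre exposes no user-facing status today):

```python
from dataclasses import dataclass
from enum import Enum

class ReportStatus(Enum):
    OPEN = "open"
    CONTENT_REMOVED = "content removed"
    USER_WARNED = "user warned"
    REJECTED = "rejected"

@dataclass
class Report:  # hypothetical shape of a stored report
    content_id: int
    reporter_id: int
    status: ReportStatus = ReportStatus.OPEN

def resolve_report(report: Report, status: ReportStatus, notify) -> None:
    """Close a report; `notify` would be the forum's alert/email mechanism."""
    report.status = status
    notify(report.reporter_id,
           f"Your report on content #{report.content_id}: {status.value}.")

# Usage with a print-based stand-in for the real notifier:
resolve_report(Report(42, 7), ReportStatus.CONTENT_REMOVED,
               lambda uid, msg: print(f"to user {uid}: {msg}"))
```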

5. Enhanced User Customization and Filtering Tools
  • Expansion/Improvement: XenForo currently offers some customization features, such as ignoring users, but further expansion of content filtering and customization tools would help users have more control over their experience, particularly in avoiding harmful or sensitive content.
    • Compliance: Section 19 of the Act requires platforms to allow users to control their content experience.
Details:
  • XenForo should introduce keyword filtering that allows users to block content that contains specific words or phrases.
  • Platforms should also enable content preference settings for users to opt-out of certain types of discussions, media, or topics (e.g., sensitive subjects like violence, abuse, etc.).
  • Providing more user control over content exposure will align with the Act's goal of increasing user safety and comfort online.
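A sketch of what per-user content preferences could look like (again hypothetical - XenForo has no such per-user setting built in):

```python
from dataclasses import dataclass, field

@dataclass
class ContentPrefs:
    hide_images: bool = False
    hide_videos: bool = False
    hidden_topics: set = field(default_factory=set)  # e.g. {"violence"}

@dataclass
class Post:  # hypothetical minimal post shape
    text: str
    has_images: bool = False
    has_videos: bool = False
    topic_tags: set = field(default_factory=set)

def should_hide(post: Post, prefs: ContentPrefs) -> bool:
    """Apply a user's content preferences to a post before display."""
    if prefs.hide_images and post.has_images:
        return True
    if prefs.hide_videos and post.has_videos:
        return True
    return bool(prefs.hidden_topics & post.topic_tags)
```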

Existing XenForo Functionality that Complies with the Online Safety Act 2023

XenForo forum software already provides a range of features that align with many of the core requirements of the Online Safety Act 2023. Below is an overview of the existing functionality within XenForo that complies with the Act's requirements:

1. User Reporting System
  • Existing Functionality: XenForo offers a built-in report system that allows users to report posts, threads, or user behavior that violate forum rules. Reports are sent to moderators or administrators, who can then take appropriate action.
    • Compliance: This feature directly addresses Section 14 of the Online Safety Act, which requires platforms to provide mechanisms for users to report harmful or rule-violating content.
Details:
  • Users can report posts, threads, and private messages.
  • Moderators and administrators are notified about the reports and can take action, such as removing content or issuing warnings to users.
  • XenForo provides a streamlined process to report content and users, making it accessible for users to flag violations of community guidelines.
2. Content Moderation Tools
  • Existing Functionality: XenForo includes robust moderation tools that allow moderators to remove, edit, or move content, as well as issue warnings to users who violate the rules.
    • Compliance: This satisfies Section 14 of the Act, which mandates platforms to have systems in place to moderate harmful content.
Details:
  • Automated features like spam detection and automatic flagging help identify potentially harmful content.
  • Moderators can take action on reported content, block users, and suspend or ban accounts if needed.
  • XenForo allows moderators to set specific permissions and access rights, improving content moderation for different user levels.
3. Blocking and Muting Features
  • Existing Functionality: XenForo includes user features like ignore and block to allow users to mute or block other users’ content. This feature helps protect users from harassment or unwanted interactions.
    • Compliance: This is in line with Section 15 of the Act, which requires platforms to offer blocking and muting features.
Details:
  • The "Ignore" feature allows users to block content from specific users (e.g., ignore posts, threads, and private messages from a particular user).
  • Blocked users' content is hidden from the person who has ignored them, offering a mechanism to prevent abusive interactions.
..
 
Enhanced Reporting Options: Expand reporting categories and provide real-time feedback to users. (Suggested here)
I'd certainly like to see some enhancements to the reporting system without going crazy and turning it into a full-blown ticketing system!

Allowing those reporting to select a report category from a (configured) list would be a useful start. Even nicer if access to reports in different categories could be configured for different staff - some staff handling, say, spam and others handling more nuanced reports. Much as you can already define access to post / conversation reports, I can imagine this would be useful for some sites.

I'd also love it if we were able to set a threshold for reported content, above which some defined action was automatically taken if, say, the report was unassigned. For instance, a post is reported more than 'n' times and is then placed into moderation. Granted, a coarse tool, but I'd have thought it useful for sites with just a few moderation staff to cover those "overnight periods". I would imagine moderation, soft delete, wrapping in spoiler tags, and assignment to a member of staff are the most useful potential actions. (A sketch of the idea below.)
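To make the threshold idea concrete, a sketch (the storage and moderation hooks are hypothetical stand-ins - in reality this would live in an add-on against XenForo's own API):

```python
from collections import defaultdict

REPORT_THRESHOLD = 3  # the "n" above; would be a per-site setting

# content_id -> distinct reporter ids (in reality this lives in the database)
report_counts: dict = defaultdict(set)

def register_report(content_id: int, reporter_id: int,
                    is_assigned, send_to_moderation) -> None:
    """Count distinct reporters; auto-moderate once the threshold is hit
    and no staff member has picked the report up yet."""
    report_counts[content_id].add(reporter_id)
    if (len(report_counts[content_id]) >= REPORT_THRESHOLD
            and not is_assigned(content_id)):
        send_to_moderation(content_id)  # could also soft-delete or spoiler-wrap
```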

It would also be nice to be able to generate some kind of synopsis of reports based on the categories: "time open", the staff who have commented on reports, numbers resolved, rejected, and so forth.

With regards to users' visibility of reports, I suppose it would be nice if they were able to see, as a minimum, a list of reports they have initiated together with the response status and note.

I suspect that any additional tools will need to be commissioned; still, maybe that is a cost that can be shared between interested parties. I had been wondering about contacting @Xon about enhancing the (awesome) existing Report Improvements add-on and what the costs may be to add, if possible, the features mentioned above as a minimum. The existing Content Ratings add-on (which we don't presently use) would appear to essentially have the trigger-type actions suggested for reporting, so hopefully some of the groundwork is already done. [EDIT - I also note that Report Centre Essentials covers off the reporting functionality within the existing limitations]

@eva2000 Just wanted to say thank you for putting in the effort to create the template risk assessment; that is a huge help in working through the rules. Very much appreciated.
 