UK Online Safety Regulations and impact on Forums

2,000 posts a day - I'll not be paying for AI checks on those! Why pay for AI when I've got thousands of members quick to report anything that breaks the site's rules? Community moderation of public posts is so much better than AI. DMs are another matter - AI could be employed there.
Also, AI pricing is based on message characters (tokens), not the number of messages, so the cost can be higher than you'd expect. The key concern is how strictly these rules will be enforced.


This is the tool for token calculation.
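
For anyone who wants a rough number without the web tool, here is a minimal sketch using OpenAI's tiktoken library (the encoding name and the per-token price are illustrative assumptions; substitute your own model's rates):

```python
# Rough cost estimate for AI-checking a day's posts, priced by tokens
# rather than by message count. Assumes the tiktoken package; the price
# below is purely illustrative, not a real quote.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def estimate_daily_cost(posts, price_per_1k_tokens=0.0005):
    """Sum token counts across a day's posts and convert to a cost."""
    total_tokens = sum(len(enc.encode(p)) for p in posts)
    return total_tokens, total_tokens / 1000 * price_per_1k_tokens

# e.g. 2,000 posts of a few hundred characters each
tokens, cost = estimate_daily_cost(["An average-length forum post..."] * 2000)
print(f"{tokens} tokens, roughly ${cost:.2f} per day")
```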
 
I have been working with a developer who has come up with a great app which sends all posts, anonymised, to OpenAI to be checked for harmful content; anything flagged is then taken down until reviewed by an admin.
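
For anyone wondering what that looks like in practice, a minimal sketch along those lines, assuming OpenAI's moderation endpoint via the openai Python package (the take-down wiring into the forum is left out):

```python
# Minimal sketch: check a post against OpenAI's moderation endpoint and
# report whether it should be held for admin review. Requires the openai
# package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def should_hold_post(text: str) -> bool:
    """Return True if the moderation model flags the post."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged

if should_hold_post("some user-submitted post"):
    print("Post hidden pending admin review")
```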
Will you be packaging that up for licence/sale, etc.?

I run a UK footy forum a bit like yours, so I'm keen to see what's what.
 
If there were a plugin or API service that offered to block:

1. CSAM
2. Any nudity

That would be good. I'm for a scattergun approach - block it all.

If some dude's bare arms in a photo get falsely flagged to moderators from time to time - who cares?
 
There are several services for CSAM detection based on fuzzy hashing. However, access to them is handled manually, with applicants making their case, and I don't think they are geared up for smaller companies. That said, I have not yet enquired. The only more immediately accessible service was via Cloudflare (if you have an account), where they would scan your site (I assume it has to sit behind them) looking for CSAM - you can probably dig up their blog article about it, or I did post a link somewhere earlier in this thread. That, however, would be a retrospective solution (which may be enough, depending on your risk assessment).
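
For a feel of how that fuzzy (perceptual) hashing works, here is a toy illustration using the open-source imagehash library. The real services (PhotoDNA and the like) match against curated hash databases you cannot download, so this shows only the matching mechanics, not the service:

```python
# Toy illustration of perceptual ("fuzzy") image hashing, the technique
# behind CSAM-matching services. Real services compare against curated
# hash databases; here we just compare two local files. Requires the
# Pillow and imagehash packages.
from PIL import Image
import imagehash

def hashes_match(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Perceptual hashes stay close even if an image is resized or
    re-encoded, so we compare by Hamming distance, not equality."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance

print(hashes_match("upload.jpg", "known_image.jpg"))
```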

For more run-of-the-mill nudity, any of the AI image engines can do that - AWS Rekognition, for instance - and you can act on the information it returns. I can't speak to the pass/fail rates for identifying things; I expect it's something you may need to tune to your typical user content. I'd been idly wondering about having a play with some of these systems when time permits.
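
As a sketch of what acting on the returned information might look like, assuming boto3 and Rekognition's DetectModerationLabels call (the confidence threshold and region are guesses you would tune to your own setup):

```python
# Sketch: flag an uploaded image for moderator review if AWS Rekognition
# returns any moderation labels (nudity etc.) above a confidence cutoff.
# Requires boto3 and configured AWS credentials; region is an example.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-2")

def flag_image(image_bytes: bytes, min_confidence: float = 80.0) -> list[str]:
    """Return the list of moderation label names found in the image."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

with open("upload.jpg", "rb") as f:
    labels = flag_image(f.read())
if labels:
    print("Hold for review:", labels)
```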

Obviously, for a lovely seamless experience, you'd need to write (or commission someone to write) a XenForo add-on to do the interfacing.

That would be good. I'm for a scattergun approach - block it all.
You could just block media uploads and the media/img BBCodes, if those are not a key part of your own forum!
 

UK willing to renegotiate online harm laws to avoid Trump tariffs

Starmer may be prepared to alter social media safety Act to accommodate US president and his ‘tech bros’ to secure favourable trade deal

The Government is willing to rework its Online Safety Act in order to swerve tariffs from Donald Trump’s administration.

The law, which regulates online speech, is thought to be heavily disliked by the president’s administration because it can levy massive fines on US tech companies.

Downing Street is willing to renegotiate elements of the Act in order to strike a trade deal, should it be raised by the US, The Telegraph understands.

The law has been heavily criticised by free speech advocates and economists, who argue its broad provisions to tackle harmful online content could lead to excessive censorship and deter investment from American tech giants.

 
because it can levy massive fines on US tech companies
Why do I get the feeling the "massive fines" on UK companies bit won't be worried about ;)


As an aside, I noticed a real-world example: Mumsnet (a UK parenting forum with roughly £8M turnover) had a run-in with CSAM material being uploaded to several threads last week. There is a thread discussing it (very lightly) if you are curious to read some reactions.

Interestingly, given they make about £2M in clean profit a year, they don't have any overnight (UK) moderators and instead rely on volunteers without full access to the moderation tools. I'd have expected paid staff in Australia and someone on the west coast of the USA to cover the out-of-hours period (quite possibly just one person in each timezone doing a 9-5), even if volunteers were used in addition. So response time was heavily criticised (e.g. the time taken to disable image uploads).

Another point made is that under UK law, by viewing the material you are committing a crime (the offence of "making" an indecent image), which does rather raise the question of how anything might legally be moderated (except perhaps by automated AI), although in reality a prosecution is unlikely to be brought.

The main take-home, as ever, would seem to be clear communication (or the lack thereof) from those running the site. So: a lesson in having a plan should the worst happen, and in clearly letting customers know what's happening.
 

UK willing to renegotiate online harm laws to avoid Trump tariffs

Starmer may be prepared to alter social media safety Act to accommodate US president and his ‘tech bros’ to secure favourable trade deal

Searching for a version of the story not hidden behind a paywall turned up this article, posted less than an hour ago.

 
I have a forum with over 100,000 members. Most of them date from many years ago, because the forum has been running for about 25 years now; they are dormant accounts. And then there are all the spam accounts. To age-verify all these accounts from all over the world would cost a fortune and bankrupt me. It's also impossible: how would I age-check a member from, say, China or Russia?

What solution does that leave? Banning all those members - and banning the existing members too, because age-checking them with a third-party service would also be financially prohibitive. What a depressing state of affairs.

Rarely can you count on even one in a hundred Russian or Chinese registrations being an actual person. I've blocked both countries entirely, and it had zero impact on forum activity other than less spam.
 
The law, which regulates online speech, is thought to be heavily disliked by the president’s administration because it can levy massive fines on US tech companies.

That might be ironic if they are objecting to another country's right to have its own interpretation of free speech. Not that I support the Act in the way it seems to be implemented, but I do support the intent, especially protecting children from harm and the various other identified harms.

Another point made is that under UK law, by viewing the material you are committing a crime (the offence of "making" an indecent image), which does rather raise the question of how anything might legally be moderated (except perhaps by automated AI), although in reality a prosecution is unlikely to be brought
That’s interesting. Likely prosecution or not, the law is the law and should apply evenly - the government should not be considered exempt.
 
So predictable... it's all about money. OK, so they decided they really, really care about kids and something must be done - and then they suddenly remembered trade deals are more sacred? Give me an effing break. Why are we forced to live by rules that are corrupt from the first thought?

It's not much different from saying: my friend Bill is against pedos... send him $100 by the end of the month. We made it a law, so you have to, and if you don't, you obviously don't care and need to be punished. Don't even try to question Bill's integrity or where his money is invested; that's irrelevant. And don't question how casually these things are dealt with elsewhere in society, because the internet is apparently the only place there is crime, so that's where we need to deal with it.

Piss off, just all the way off... that they even think they can get away with something like this shows they have too much power and need it taken away. ;P
 
Why do I get the feeling the "massive fines" on UK companies bit won't be worried about ;)

As an aside, I noticed a real-world example: Mumsnet ... had a run-in with CSAM material being uploaded to several threads last week. ... So response time was heavily criticised (e.g. the time taken to disable image uploads). ... So: a lesson in having a plan should the worst happen, and in clearly letting customers know what's happening.
Interesting reading. It appears they may allow new posters to post photos and links and then moderate reactively; that's just stupid.

I’ve always thought Mumsnet was a bit of an angry cesspit, and I'm surprised to read that overnight moderation was so light. They could cut out most of the crap by charging a small fee. It doesn't have to be much; in my experience the worst trolls and spammers won't cough up even $1 a year, presumably because credit cards identify them. That tiny fee can be the best moderation tool in your arsenal.
 
Trump needs to be told to keep his nose out of the UK's laws on this.

If I wanted to post on another country's forum, I'd be expected to agree to their terms and conditions.

It won't bother me if I have to be careful about what I post on any UK forum.

I post on XenForo (this forum), and that is a UK forum.
I am an overly opinionated person who sometimes needs to stop sticking my nose where it shouldn't go.
I'm on some heavy-handed restrictions here because of my own issues.

At least I can post.
 
It appears they may allow new posters to post photos and links and then moderate reactively; that's just stupid.
I think it may depend on how long you allow it to stay up. If you can guarantee to moderate manually and frequently, there is less harm than leaving it posted for a day, but more harm than stopping it before it's posted - which would mean everything has to be approved before posting: totally impractical.

It occurred to me that having something posted could actually be a good thing in certain scenarios:

  • A paedophile sends a child a DM attempting to groom them
  • Mods pick that up quickly and are thus able to:
    • remove the content and warn the child if they have already seen it (we can tell from their latest online activity whether that is likely)
    • contact the police, which could end up in an arrest and conviction

So maybe some good is done.

A bit far-fetched, I know; my risk assessment will mark all likelihoods as (1) negligible, because if I can afford effective age verification then children will probably not have access to DMs.
 
My draft risk assessment (comments appreciated). CSEA = Child Sexual Exploitation and Abuse.

Risk: User Generated Content
Relevant illegal content: Hate speech, harassment, CSEA, terrorism, etc.
Risk level: Negligible
Evidence and reasoning: Users can post content, but the community is small and moderation is carried out regularly. Evidence: low volume of user reports, active (DBS-checked) moderator presence, clear community guidelines. There have been no incidents in 17 years. Users engaging in harmful behaviour would be immediately banned and any identified illegal behaviour reported to law enforcement agencies.
Mitigation measures: N/A

Risk: Anonymity
Relevant illegal content: Harassment, trolling, illegal content sharing
Risk level: Negligible
Evidence and reasoning: Users cannot post anonymously.
Mitigation measures: N/A

Risk: User Connections
Relevant illegal content: Grooming, harassment, coercive behaviour
Risk level: Low
Evidence and reasoning: Users can connect, but the community is small and connections may be limited. Evidence: low number of user-to-user connections. Private messages are not available until users have posted publicly and are known to have a legitimate interest in the forum topic as a professional, educator or hobbyist. Nor are private messages available to children; with or without effective age verification, this would include any potential groomer posing as a child. A very obvious and simple-to-use private message report system is enabled and monitored regularly.
Mitigation measures: Monitor user interactions - implement non-intrusive systems to detect and flag suspicious patterns of user interaction (e.g. excessive private messaging between adults and minors) without infringing on privacy; a sketch of this follows the table. Implement blocking features - allow users to block other users who engage in harmful behaviour. Educate users - provide information and resources on online safety and how to identify and report grooming or coercive behaviour.

Risk: Lack of Age Verification
Relevant illegal content: CSEA, exposure to harmful content
Risk level: Medium
Evidence and reasoning: Any content that is inappropriate for children is removed via regular monitoring or reports. Any users who post such content are subject to disciplinary action and, depending on severity, would be banned; if the content were deemed illegal, it would be immediately reported to law enforcement agencies.
Mitigation measures: Consider age verification measures - explore options (e.g. self-declaration, third-party verification services) while balancing privacy and accessibility concerns.
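
As a very rough sketch of what "non-intrusive monitoring" could mean in practice - counting DM metadata (who messaged whom, and when) without ever reading content. Every name and threshold here is a hypothetical illustration, not a real XenForo API:

```python
# Hypothetical sketch: flag pairs where an adult account sends an
# unusually high volume of DMs to a minor account, using only metadata
# (sender, recipient, timestamp), never message content. The data shape
# and threshold are illustrative assumptions.
from collections import Counter
from datetime import datetime, timedelta

def flag_suspicious_dm_pairs(messages, minors, window_days=7, threshold=30):
    """messages: iterable of (sender_id, recipient_id, sent_at) tuples.
    minors: set of user IDs known/declared to be under 18.
    Returns (sender, recipient) pairs exceeding the message threshold."""
    cutoff = datetime.now() - timedelta(days=window_days)
    counts = Counter(
        (sender, recipient)
        for sender, recipient, sent_at in messages
        if sent_at >= cutoff
        and recipient in minors
        and sender not in minors
    )
    return [pair for pair, n in counts.items() if n >= threshold]
```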
 
I have edited my risk assessment (see https://xenforo.com/community/threa...ions-and-impact-on-forums.227661/post-1733922), as it occurred to me that, whether or not you have what they call effective age verification, if you disallow DMs to children then any groomer wanting to pose as a child would have no access to DMs.

NB: my risk assessment is based on @eva2000's small-forum template table here:

 
Now edited to note that moderators have been DBS checked. Not a huge thing, but I imagine all the little things add up. In fact I know they do, having just had a chat with a neighbour who is a safety officer on a wind farm.

Our saxophone forum isn't a wind farm, but not far off. :)
 