UK Online Safety Regulations and impact on Forums

ChatGPT says it would likely cost between $500 and $2000 to hire a developer to integrate Verifymy's API into XenForo :rolleyes: .
 
Interesting article here:

"...... So perhaps spare a thought for those who are getting to grips with core and enhanced inputs, puzzling over what amounts to a ‘significant’ number of users, learning that a few risk factors may constitute ‘many’ (footnote 74 to Ofcom’s General Risk Level Table), or wondering whether their service can be ‘low risk’ if they allow users to post hyperlinks. (Ofcom has determined that hyperlinks are a risk factor for six of the 17 kinds of priority offence designated by the Act: terrorism, CSEA, fraud and financial services, drugs and psychoactive substances, encouraging or assisting suicide and foreign interference offences).

Grumbles from whichever quarter will come as no great surprise to those (this author included) who have argued from the start that the legislation is an ill-conceived, unworkable mess which was always destined to end in tears. Even so, and making due allowance for the well-nigh impossible task with which Ofcom has been landed, there is an abiding impression that Ofcom’s efforts to flesh out the service provider duties - risk assessment in particular – could have been made easier to understand.
 
I don't think this is as complicated as you're making out.

The diagram is pretty simple:

[Attachment: decision flowchart from the Ofcom guidance]

Whilst we all answer yes for stage one, a lot of forums can easily answer no to both questions in stage two and thus not need to do a child assessment.
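For anyone who thinks better in code, my reading of that flowchart boils down to something like this (a rough sketch of the decision logic only, not legal advice):

```python
def needs_childrens_risk_assessment(
    children_can_access: bool,         # stage 1: is it possible for children to access the service?
    significant_child_users: bool,     # stage 2, q1: significant number of child users?
    likely_to_attract_children: bool,  # stage 2, q2: service of a kind likely to attract children?
) -> bool:
    """Rough encoding of the two-stage test in the Ofcom flowchart."""
    if not children_can_access:
        return False  # stage 1 answered "no": stop here
    # Answering "no" to BOTH stage-two questions means the child user
    # condition is not met, so no children's risk assessment is needed.
    return significant_child_users or likely_to_attract_children
```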

It even gives a load of forums as examples at the end of the document.

If your forum is unlikely to be of interest to children, all you need to do is log the date and your reasoning for how you reached that conclusion.

I'd say automotive forums are pretty safe there, but I can see why a hamster forum might fall foul.

Then we just keep it filed? Do we have to actually submit anything anywhere?
Just going back to this - I think the guidance is clear that you have to assume you have children registered on your site if you don't have age verification software. Otherwise there is no way of proving that you haven't (even though, admittedly, they are not likely to register for an automotive service). The wording is in the first bit of the second box: "Are there a significant number of children who are users of this service?" How many people do you have registered who don't post? How can you show none of those are children? According to Ofcom, you can't know unless you have age verification software (and that doesn't work retrospectively even if installed now), so they are basically saying everyone should assume children can access the site if they don't have age verification software - which means everyone needs to do a child risk assessment.
 
OK, my conclusions so far.

If I have no age verification software, then to mitigate the risks identified in a child risk assessment I would need:

1) No DMs
2) No embedded YouTube videos (or any other video option, due to server overload)
3) All posted hyperlinks to go through manual moderation
4) Wide-ranging keyword auto-moderation set up (see the sketch below)
5) New registrations prevented from posting links
6) An additional moderator or two.

Which would dumb down the site a lot.
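On point 4, the mechanics of keyword auto-moderation are simple enough; here's a minimal sketch in Python (the patterns are placeholders, and XenForo's built-in spam phrase and moderation queue options already cover much of this):

```python
import re

# Placeholder patterns: a real word list would be much larger and reviewed regularly.
BLOCKED_PATTERNS = [
    re.compile(r"\bbanned phrase\b", re.IGNORECASE),
    re.compile(r"https?://", re.IGNORECASE),  # crude: send any post containing a link to the queue
]

def should_hold_for_moderation(post_text: str) -> bool:
    """Return True if the post should go to the manual moderation queue."""
    return any(p.search(post_text) for p in BLOCKED_PATTERNS)
```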

To avoid all this and have age verification software instead, I would need:

1) $500 to $2000 to pay a developer to integrate the API into XenForo
2) Charge 50p on registration to cover the 35p cost per age verification check
3) Everything else could stay as it is now except maybe one more part-time moderator
4) No DMs (not a great loss on this particular site).


One thought I had: if enough people wanted age verification, couldn't we make a group and all contribute towards paying a developer to develop a plugin we could all use? It would only be for the Verifymy option, though.
 
"Are there a significant number of children who are users of this service?" How many people do you have registered who don't post? How can you show none of those are children?
You could argue that lurkers would have the same ratio of children/adults as active users.
Everything else could stay as it is now except maybe one more part-time moderator
Ideally to cover the relevant international time zones.

One thought I had: if enough people wanted age verification, couldn't we make a group and all contribute towards paying a developer to develop a plugin we could all use?
Yes, that might be possible. Such a thing may turn my non-profit into a loss, and given that any (otherwise) profit is mostly donated to charities that help special needs children, it is rather ironic. (To date, probably over £100,000.)

This year's profit of about £4k would have gone to a local school for mostly autistic kids and is now being held back just in case.
 
Just going back to this - I think the guidance is clear that you have to assume you have children registered on your site if you don't have age verification software.
If you look at the case studies at the bottom of that document, it's very clear that's not the case ;)

SME retirement forum
A micro-business sets up a forum where users can discuss retirement plans.

Such a service would not be targeted at children, and this would likely be reflected in its business plan and marketing, therefore the service may conclude that children do not form part of the service’s commercial strategy. The service provider may also consider that the service does not provide a benefit to children, that the content would not appeal to children, and that the design is not attractive to children. The content on the forum is about retirement plans and would therefore appeal to adults. The service provider in this case may therefore conclude that the service is not of a kind likely to attract a significant number of users who are children.

Such a service may also have user data to suggest that there are not a significant number of children on the service. For example, the service’s total user base is 5,000 UK monthly users. It has never deleted any accounts due to reports of the user being a child, or received complaints or reports about users who are children on the service.

The provider concludes that the child user condition is not met. The provider records the date, the outcome, the steps taken, and the evidence used to justify their conclusion.

Community forum
A service is offering an online community forum on travel, building friendships and overcoming challenges. The service is targeted at adult users who are 40 and over. The articles and discussion on the forum relate mainly to travel for women over 40. The other content on the forum discusses new job opportunities for women seeking a career change.

Such a service would not be targeted at children, and this would be reflected in the content of the service. The service provider may also consider that the service does not provide a benefit to children, that the design is not attractive to children, and the advertising on the service is targeted at an older adult demographic.

The provider considers that the size of its user base allows it to profile it accurately using internal information. It may analyse a range of user data, which suggest that existing users are highly unlikely to be children. It may also look at publicly available statistics on children’s access to services similar to its own. The service also considers whether there have been any reports of users being children, or any under-age accounts being blocked. It has never deleted any accounts due to reports of the user being a child, or received complaints or reports about users who are children on the service.

The provider concludes that the child user condition is not met. The provider records the date, the outcome, the steps taken, and the evidence used to justify their conclusion.


It also doesn't ask whether any children are registered, just whether a significant number are registered / make up the user base.
 
If you look at the case studies at the bottom of that document, it's very clear that's not the case
Those case studies are useful in that one of my forums, a residents' association, is targeted at property owners with a small minority of renters, so it's not targeting children.

The saxophone forum, though, is a different kettle of fish. Children have been very rare, but I'm sure when there are any we can tell, as they might ask questions about their high school band director being a ••••

But I can only think of two examples in 15 years, and we also had an age poll:
[Attachment: age poll results]
 
But I can only think of two examples in 15 years, and we also had an age poll:
[Attachment: age poll results]
That was a good idea. Kind of indirect age questioning.
 
If you look at the case studies at the bottom of that document, it's very clear that's not the case ;)

It also doesn't ask whether any children are registered, just whether a significant number are registered / make up the user base.
Fair enough, if your forum falls into a case study like one of those. The whole guidance is contradictory in places, though.
 
I would have great difficulty stopping 13- to 17-year-olds joining up without age verification software, given the type of topic it is: a child's pet. With younger kids it tends to be the parents who join up, but many teens have their own pet.

Mitigating for a child risk assessment would be added work and pressure. An additional moderator would be hard to achieve - people have busy lives. I did have a second one previously, but they left for health reasons, and it wasn't too busy so I was fine with just one.

While the only real "risks" are spam links (which is a very low risk with good spam prevention), I think I would not be in a very relaxed state, wondering if someone was going to say or post something that might upset a child, or whether someone might complain. So age verification would give me peace of mind. No doubt the 16- and 17-year-olds would get a parent to sign up and let them know the login…

The child risk assessment stuff is very in-depth though. It even talks about photos having EXIF data that could lead to communication between two people - what?!

I just don't want the pressure and responsibility of being solely responsible for child safety, with such an intricate legal act in the background, or risking complaints or reports.
 
OK, not good news. Verifymy just came back saying they'd want £2000 up front and it would be 50p per check, not 35p (at the volume required, i.e. 100 or fewer per month). Just as I had started seriously considering finding a developer.
 
The child risk assessment stuff is very in-depth though. It even talks about photos having EXIF data that could lead to communication between two people - what?!
We do periodically have that woe. Since people use phones a lot more now and there is GPS data in the EXIF, I know of a few occasions when I've had to remind people to remove the EXIF data for privacy, rather than essentially telling the world where they are.
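If anyone wants to automate that, stripping metadata server-side at upload time is only a few lines. A minimal sketch in Python using Pillow, assuming JPEG photo uploads (a real XenForo add-on would hook the attachment pipeline instead):

```python
from PIL import Image  # Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save a photo with pixel data only, dropping EXIF (including GPS tags)."""
    with Image.open(src_path) as im:
        rgb = im.convert("RGB")              # normalise mode for JPEG output
        clean = Image.new("RGB", rgb.size)   # a fresh image carries no metadata
        clean.putdata(list(rgb.getdata()))   # copy pixels only
        clean.save(dst_path, "JPEG", quality=90)

# e.g. strip_metadata("upload.jpg", "upload_clean.jpg")
```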

£2000 up front and it would be 50p per check, not 35p
That's disappointing. I was just ploughing through their docs and pondering how you'd do the flow in XF.
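From the outside it looks like the usual redirect-plus-webhook shape. A sketch of how I'd expect it to hang together in Python - to be clear, the endpoint, field names, and the in-memory store are all hypothetical stand-ins, not Verifymy's actual API:

```python
import secrets
import requests

API_BASE = "https://api.verifier.example"  # hypothetical base URL, not the real endpoint
API_KEY = "..."                            # credential issued by the provider

_pending: dict[str, int] = {}  # reference -> user_id (stand-in for a DB table)

def start_verification(user_id: int) -> str:
    """Create a verification session and return the URL to send the member to."""
    reference = secrets.token_urlsafe(16)
    _pending[reference] = user_id
    resp = requests.post(
        f"{API_BASE}/sessions",  # hypothetical path
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "reference": reference,
            "callback_url": "https://forum.example.com/age-verify/callback",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["redirect_url"]  # member completes the check off-site

def handle_callback(payload: dict) -> None:
    """Webhook from the provider: on success, promote the member."""
    user_id = _pending.pop(payload.get("reference", ""), None)
    if user_id is not None and payload.get("status") == "verified":
        # Real code would add the member to an "age verified" user group here.
        print(f"user {user_id} verified")
```

The appeal of that shape is that the forum never handles ID documents itself; it only ends up storing a pass/fail flag against the account.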
 
It is indeed disappointing. They also suggested it could be up and running in 24 hours if using a different platform, but didn't say which one - presumably one needing no API integration. Which is not particularly helpful when you don't have £2,000.
 
Just throwing this into the mix - is anyone qualified to carry out a risk assessment?

If you do carry out your own RA, what happens if that RA is challenged, or omissions are found?

This has to be one of the most ill-conceived ideas I've ever come across.
As for setting the age of a child as anyone under 18: you can get married in the UK at 16, legally give birth at 16 (+9 months), and you can even drive a car and join the Army at 16. Some would even have you believe that you can change your gender.

Another question: what about Facebook Groups? How do you control the use of them when FB themselves cannot?
 
You can't get married until 18 now - the law changed (although it might be different in Scotland). It's 17 to drive. The 18-and-over age limit is just related to "online harms".

And that is an issue. We aren't compliance lawyers. But Ofcom did produce this guide for small sites. They make it sound simple enough - until you come to the children aspect and the separate child risk assessments, which this guide doesn't go into. It's just for the initial, basic risk assessment.

"If organisations have carried out a suitable and sufficient risk assessment and determined, with good reason, that the risks they face are low, they will only be expected to have basic but important measures to remove illegal content when they become aware of it. These include:


  • easy-to-find, understandable terms and conditions;
  • a complaints tool that allows users to report illegal or harmful material when they see it, backed up by a process to deal with those complaints;
  • the ability to review content and take it down quickly if they have reason to believe it is illegal; and
  • a specific individual responsible for compliance, who we can contact if we need to."
 
Some would even have you believe that you can change your gender.
That would depend on where you live; in the UK and most US states I think it's 18. I'm not sure about other countries or states, but we are talking about legal requirements, not beliefs.
 
Any unqualified attempt at a risk assessment instantly fails when you read this:

User base demographics

The demographics of your user base, including users’ protected characteristics, media literacy levels, and mental health, may influence the risk of illegal harm. Vulnerable users, particularly those with multiple protected characteristics, are more likely to experience harm from illegal content and are impacted differently by it. We would expect you to consider these dynamics when you assess the risk of each type of illegal harm.


These dynamics are highly complex and context-specific, and evidence is provided in the Register of Risks (PDF, 4.65 MB) on user base demographics for each kind of illegal harm. This can help you assess this risk factor even if you do not have any service-specific information on the demographic of your user base.
 
Any unqualified attempt at a risk assessment instantly fails when you read this:
Maybe we all need to give up then, hamsters or no hamsters 🐹
 
"If organisations have carried out a suitable and sufficient risk assessment and determined, with good reason, that the risks they face are low, they will only be expected to have basic but important measures to remove illegal content when they become aware of it. These include: … the ability to review content and take it down quickly if they have reason to believe it is illegal …"
The sticking point for me, even with that simplified version, is "taking it down quickly" without 24/7 moderation.
 