UK Online Safety Regulations and impact on Forums

It all comes across, particularly regarding the blocking of children's access to pornography, as putting a heavy load on legitimate organisations that are trying to comply, while the illicit sites that don't really care are probably out of Ofcom's realistic reach.
The most Ofcom will be able to do about non-compliant sites hosted outside the UK is ask ISPs to block them, which anyone who wants to will get around with a VPN.
Yeah, it's easy to get around, though still not without costs in terms of money, time and effort.

Yeah, an interesting debate from all sides: https://www.biometricupdate.com/202...uld-governments-pay-to-protect-kids-from-porn

Government projects mean public money, and the question of whether age assurance for porn is a valid use of tax dollars hangs over the deployment of government-led initiatives. The UK has joined Europe in launching a UK.gov wallet product that could facilitate age assurance through mDLs, causing consternation among industry voices that governments could crowd out private enterprise, putting a damper on a burgeoning industry.

Brandstätter quotes Julie Dawson, chief policy and regulatory officer at Yoti, who points out that “there are age assurance approaches which are ready today, operating at scale globally that are accepted by industry, platforms, and consumers already.”

Others are making the same point, and preparing for a shifting landscape. The Global Age Assurance Standards Summit featured a panel moderated by the Age Verification Providers Association (AVPA) on “The Future of Age Assurance in the Face of Competition from Government and Big Tech” – conjuring not just the specter of the Man, but also the looming shadow of the tech giants that have defined most online experiences in the 21st century.

“The EU’s white label app aims to accelerate the ability to use the EU Digital ID wallet for age verification, and addresses some of the concerns that the Architecture and Reference Framework cannot support truly anonymous, authenticated online age checks,” AVPA Executive Director Iain Corby told Biometric Update in an email.

“It is inevitable that states will issue digital ID in one form or another, but we argue in favour of consumer choice, with many users reluctant to use a government-issued key to access the Internet. We strongly support the principles of equality, portability, choice and interoperability required to deliver a vibrant, innovative and competitive public and private sector ecosystem for digital identities and age assurance.”

Various publics may have a hard time swallowing the idea that their tax dollars should pay to protect kids from websites that resist paying for online safety measures themselves. The bigger threat, perhaps, is Meta or Google seizing the age assurance mantle, further consolidating digital access among giants, and leaving independent providers behind.
 
Latest version of my facial age verification system, with a frontend MVP demo (left image) and a dedicated test endpoint for running 2D portrait photos and various age thresholds (18, 21, 23, 25, etc.) through my AI facial age verification analysis, to see how it performs on age estimation, confidence level, and its reasoning for the determination. It has been surprisingly accurate for 2D portrait photos, at least within 0-3 years of known age, across a small set of test images. But there have been some false results as well, for folks who do not look their age.
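
Roughly, the test endpoint works with a request/response shape along these lines (a simplified sketch, not the exact production schema; the field names here are illustrative):

```typescript
// Illustrative request/response shape for the age-check test endpoint.
// Field names and structure are assumptions for this sketch, not the real API.
interface AgeCheckRequest {
  image: string;     // base64-encoded 2D portrait photo
  threshold: number; // age gate to test against, e.g. 18, 21, 23 or 25
}

interface AgeCheckResult {
  estimatedAge: number;     // model's age estimate for the face
  passesThreshold: boolean; // estimatedAge >= threshold
  confidence: number;       // 0..1 confidence in the estimate
  reasoning: string;        // model's stated basis for the determination
}
```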


[Attached: cf-age-verification-mvp-demo-v7-1.webp, cf-age-verification-mvp-demo-v7-2.webp]
 
Thanks for the heads up. For age verification, see https://www.ofcom.org.uk/online-safety/protecting-children/age-checks-to-protect-children-online

Same as I understood it. Everyone has to do the children's risk assessment first. But as mentioned, implementing age assurance can waive that children's risk assessment. Seems cheaper to do the risk assessment than to implement age assurance!
It might be cheaper, but other software might be needed, and it would no doubt dumb down a forum with all the mitigations required and create a lot of extra work... that's the issue really. Whereas with age verification, a site can more or less just carry on as before. Which is why I think it's worth paying for age verification (within reason!). Everyone has to do the basic "Child Access Statement". If, from that, you deem your site is likely to attract children or likely to be accessible by children, then you only need to do the Child Risk Assessment if you don't have age verification software. Which is my bugbear personally, as I would rather have age verification. It's worrying and onerous mitigating for a Child Risk Assessment with the detail it goes into (but I haven't read the new guidance yet...)

This bit "They should then implement safety measures to mitigate those risks [2]."
 
It might be cheaper, but other software might be needed, and it would no doubt dumb down a forum with all the mitigations required and create a lot of extra work... that's the issue really. Whereas with age verification, a site can more or less just carry on as before. Which is why I think it's worth paying for age verification (within reason!).
Yeah, I can see it from both sides. Definitely glad I started working on my own facial age verification system, as the UK isn't the only country going down this road. Here in Australia they are doing the same!
 
Yeah, I can see it from both sides. Definitely glad I started working on my own facial age verification system, as the UK isn't the only country going down this road. Here in Australia they are doing the same!
I'm glad you have too! Do you think it can be developed? Presumably for purchase or similar?
 
This says "Today we are publishing a major policy statement..." So it's a statement rather than new guidance (I think the MP response earlier said something else would be published at the end of this month?).

Actually, there are some links in there...

"We have also updated our existing record-keeping and review guidance to include specific guidance on record-keeping for children’s risk assessments."

"A summary of the measures in the Codes and the user-to-user and search services to which they apply."

This bit is a bit vague - so no actual guidance yet?

"In the coming weeks, we will also be updating our resources for services to take account of the new children’s duties."

So a load more stuff to read and no Child Risk Assessment guidance :rolleyes:
 
Just found an example talking about forum size - 50,000 UK users per month is a "relatively small" service.

Now the example for content promoting hopelessness/depression is quite interesting: a forum with 10,000 UK users per month, no age assurance in place, and T&Cs prohibiting the harmful content. It only reviews content after it's posted, but reviews everything in good time (which, if you are a teeny forum, you probably do anyway by skimming over everything) and has a report system. No history of any woe. Well, they say that could count as negligible risk.

@Alvin63 There are some explicit exemptions in the animal cruelty section, since I know you were concerned about posts from people asking for help. All the following would be considered okay:
  • A video of an animal in poor living conditions, appealing for further information (such as about how the animal came to be living in such conditions).
  • Images of animals in poor living conditions being used to fundraise.
  • A video of animals in poor living conditions as part of an animal cruelty awareness raising campaign.
So I'd think that if you encouraged users to use the spoiler tag to hide anything graphic from immediate view, you'd be fine on that front even with child users.
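For anyone unfamiliar, XenForo's spoiler BBCode is just [SPOILER="Graphic photo"]attached image here[/SPOILER] (the title text is whatever you choose), which collapses the wrapped content behind a click-to-reveal button.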
This bit is a bit vague - so no actual guidance yet?
See https://xenforo.com/community/threa...ions-and-impact-on-forums.227661/post-1742765 above.
 
Thank you - mountains of stuff to read :rolleyes:

OK, so looking at the summary of measures in the "Codes at a glance", which presumably relates to the Child Risk Assessment? Or is it just a repeat of the earlier codes for the main risk assessment?

These are the requirements for all user-to-user services (excluding the additional requirements for large user-to-user services and multi-risk services).

PCU A2 - Name an individual
PCU G1 - Terms of Service - Include all information mandated by the Act in terms and statements regarding the protection of children

Don't have time to go through all that... will give a brief summary.

What is PC and PPC?

  • Have a content moderation function to review and assess
  • Have a content moderation function that allows for swift action on content harmful to children, where it is currently technically feasible to take appropriate action (maybe a nod to smaller forums?)
  • Complaints processes
  • Appropriate action for relevant complaints
  • Action following determination for relevant complaints
  • More about complaints for non-compliance

 
I'm glad you have too! Do you think it can be developed? Presumably for purchase or similar?
Definitely possible, though legal and other overhead costs would also raise prices a lot. I'm not too fond of being a provider for this. Initially I'm developing it for my own use and for my paid clients as a product solution: the clients roll out their own version, so they would handle all the legal and other stuff themselves. Though I'm way off from that, still just in MVP testing to see what the system is capable of.

Anyone with a Cloudflare account can set up most of the frontend stuff, so you know the system would scale with a lot of traffic, too. Cloudflare Workers' paid plan is $5/month plus usage-based billing (https://developers.cloudflare.com/workers/platform/pricing/) and Workers Logs (https://developers.cloudflare.com/workers/platform/pricing/#workers-logs). There is also a cost on the AI end for input/output/images, etc.
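
As a rough sketch of what the Worker side of such a frontend can look like (simplified, with a placeholder AI endpoint and secret name; not the actual production code):

```typescript
// Minimal Cloudflare Worker sketch: accept an uploaded portrait photo and
// proxy it to an AI vision endpoint for age estimation. The upstream URL,
// secret binding and response format are placeholder assumptions.
export interface Env {
  AI_API_KEY: string; // configured as a Worker secret
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST a multipart form with a 'photo' field", { status: 405 });
    }
    const form = await request.formData();
    const photo = form.get("photo");
    if (!(photo instanceof File)) {
      return new Response("Missing 'photo' file", { status: 400 });
    }
    // Forward the image to the (hypothetical) age-estimation service;
    // the Worker stays a thin proxy, which is what lets it scale.
    const upstream = await fetch("https://ai.example.com/v1/age-estimate", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${env.AI_API_KEY}`,
        "Content-Type": photo.type || "application/octet-stream",
      },
      body: await photo.arrayBuffer(),
    });
    // Pass the model's JSON verdict (age, confidence, reasoning) back to the client.
    return new Response(upstream.body, {
      status: upstream.status,
      headers: { "Content-Type": "application/json" },
    });
  },
};
```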
 
Just found an example talking about forum size - 50,000 UK users per month is a "relatively small" service.

Now the example for content promoting hopelessness/depression is quite interesting: a forum with 10,000 UK users per month, no age assurance in place, and T&Cs prohibiting the harmful content. It only reviews content after it's posted, but reviews everything in good time (which, if you are a teeny forum, you probably do anyway by skimming over everything) and has a report system. No history of any woe. Well, they say that could count as negligible risk.

@Alvin63 There are some explicit exemptions in the animal cruelty section, since I know you were concerned about posts from people asking for help. All the following would be considered okay:
  • A video of an animal in poor living conditions, appealing for further information (such as about how the animal came to be living in such conditions).
  • Images of animals in poor living conditions being used to fundraise.
  • A video of animals in poor living conditions as part of an animal cruelty awareness raising campaign.
So I'd think that if you encouraged users to use the spoiler tag to hide anything graphic from immediate view, you'd be fine on that front even with child users.

See https://xenforo.com/community/threa...ions-and-impact-on-forums.227661/post-1742765 above.
Thank you, I'll look at that properly later. Tbh most of my members wouldn't have a clue how to use a spoiler 🤣 They are mainly not very tech-minded and some struggle even to upload a photo! It's photos of an animal with an injury or whatever that I was worried about (e.g. an eye popping out) when they have concerns - or a tumour... which can look quite nasty - with comments like "what's this?". Of course they are advised to see a vet...
 
Seems to be a lot of publicity but no sign of the new guidance!
Indeed. It's headline news today and coming under a lot of criticism from organisations that don't believe it goes far enough. I have had a wander through Ofcom's site, but I can't see anything new; then again, it's such a huge maze of information that I may well have missed something.
 
What is PC and PPC?
PPC is Primary Priority Content (harmful to children) and PC is Priority Content (harmful to children).

So far I'm not seeing much about totally preventing a child from encountering the content full stop, at least, which was my main worry. Still, I need to properly read all 491 pages, sigh.

I have had a wander through Ofcom's site, but I can't see anything new; then again, it's such a huge maze of information that I may well have missed something.
https://www.ofcom.org.uk/online-saf...atement-protecting-children-from-harms-online is probably the best place to start.
 

It’s no secret that the porn industry does not believe current options for online biometric age assurance work. It’s also no secret why: when sites such as ******* have made honest efforts to comply with local laws by implementing facial age assurance tech, their traffic has tanked.

Regulations are meant to moot the issue – but it’s unlikely that many sites will comply without a clear demonstration that regulators are willing and able to enforce their rules. That means levying fines as mandated. And they are big fines: Ofcom has the power to soak Bang Bros for up to 10 percent of a company’s qualifying worldwide revenue, or a maximum of £18 million (US$23.4M), whichever is greater.

On their LinkedIn account, the Age Verification Providers Association (AVPA) argues that effective enforcement boils down to a simple formula: does the cost to comply outweigh the likely cost of not doing so?

“A regulator must increase both the likelihood of being caught and the penalty,” AVPA says. “[1 percent chance of a fine] x [fine of £10,000] means that if adding age verification costs more than £100 it’s worth risking it and ignoring the law.”

On the other hand, a full application of Ofcom's authority could look quite different for a massive porn site raking in hundreds of millions in revenue.

The message is loud and clear: a legless dog can be kicked about – and not just by the Bang Bros, but also the tech bros. MLex has an interview with Michael Murray, head of regulatory policy for the UK Information Commissioner's Office, who says that social media platforms “have a duty not to process children's data, and shouldn't wait for wide-ranging government solutions such as the EU interim age verification app or new digital ID frameworks.”

Murray says it’s “very likely that under-13s are on these platforms, and then they’ll potentially be processing the personal information of under-13s unlawfully.”

The question is, are platforms waiting for better solutions, or for the right amount of regulatory pressure? Per MLex, social platforms Imgur and Reddit are currently under investigation by the ICO for how they process the personal information of children in the UK and their implementation of age assurance measures, and TikTok is being investigated for potentially using data from people aged 13-17 to feed its recommender systems.

The piece quotes Chelsea Jarvie, a cybersecurity expert and researcher at Glasgow’s University of Strathclyde, who notes that the social media giants will resist change to their business model, “because regardless of the user’s age, targeted ads and user profiling are how free social media platforms make money. If they can no longer monetize child users, it will directly impact their profits.”

Once again, it boils down to profit. Even if they're willing to try out options for restricting access, social media and porn sites will ultimately do what they must to keep making money. Until noncompliance tips the cost balance, that's unlikely to change.

For this reason, there is particular interest in the case of the unnamed platform under investigation by Ofcom for hosting a suicide discussion forum. The regulator has the power to demonstrate just how costly it can be to ignore online safety laws. But is it willing to bankrupt a few firms to get the message across? And if so, just how big a sacrifice will be needed to finally win the dog of regulation its legs?
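
To put the AVPA's enforcement arithmetic and Ofcom's fine cap from the piece into concrete terms (a worked sketch; the figures come from the quotes above, the function names are mine):

```typescript
// AVPA's enforcement arithmetic from the piece: the expected penalty is the
// chance of being caught times the fine.
const expectedPenalty = (pCaught: number, fine: number): number =>
  pCaught * fine;

// AVPA's example: a 1% chance of a GBP 10,000 fine = GBP 100 expected cost,
// so compliance costing more than GBP 100 is "worth risking it".
console.log(expectedPenalty(0.01, 10_000)); // 100

// Ofcom's maximum fine: the greater of 10% of qualifying worldwide revenue
// or GBP 18 million.
const maxFine = (qualifyingRevenue: number): number =>
  Math.max(0.1 * qualifyingRevenue, 18_000_000);

console.log(maxFine(500_000_000)); // 50,000,000 for a GBP 500M-revenue site
console.log(maxFine(50_000_000));  // 18,000,000 - the floor applies
```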
 
PPC is Primary Priority Content (harmful to children) and PC is Priority Content (harmful to children).

So far I'm not seeing much about totally preventing a child from encountering the content full stop, at least, which was my main worry. Still, I need to properly read all 491 pages, sigh.


https://www.ofcom.org.uk/online-saf...atement-protecting-children-from-harms-online is probably the best place to start.
This is why I would rather have age verification. It's all far too time-consuming and confusing.
 