UK Online Safety Regulations and impact on Forums

Am I misunderstanding something?

Doesn't the age verification only apply to content that should not be viewed by a minor?

You can make a community decision to restrict content from minors using the tools we have available, but surely unless you are serving 18+ material we don't have anything to verify a user's age for?
 
This is my understanding of it, yes. But there is some thinking behind “if we guarantee we never have minors, we save ourselves the headaches of making sure minors never see things they’re not meant to”. Though, realistically, I think most forums don’t need this (I’m envisaging places that have naughty words on occasion, stuff that you’d normally have to restrict to after 9pm on broadcast TV etc)
 
Am I misunderstanding something?

Doesn't the age verification only apply to content that should not be viewed by a minor?
Yes, but the issue is that unless you have a suitable mechanism to prevent minors from becoming members of your site, or you can sufficiently prove that there are no minors who are members of your site, then Ofcom pretty much say you have to assume some of your members are minors.

That then means you need to consider a risk assessment for them as well as for adult members. I don't believe all the documentation has been published regarding the measures one might be expected to undertake if you do have members who are minors, but the fear is that it's likely to be more proactive than retrospective - i.e. you need to prevent the minor from seeing the post about violence, as opposed to deciding the post was a bit much and retrospectively removing it (which would seem to be okay for adult members).

Another area that Ofcom see as a major risk is private messaging and letting minors use it. However, of course, to restrict access to things like private messaging you need to be able to determine age, and that brings us back round to the mechanisms Ofcom consider acceptable (a rough sketch of what such gating might look like is at the end of this post).

So no, if you are not serving up adult material then you don't need any age verification, but it may well be that age verification makes life easier in terms of content moderation. So it may all end up being a moot point, but I'm certainly keen to explore the options we may have ahead of the deadlines imposed by Ofcom, as any software development is liable to take some time should it be needed.
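
If it helps picture the sort of gating that implies, here's a minimal sketch (Python, purely illustrative - the user model and `age_verified` flag are my own invention). The design point is that the forum only ever stores the boolean outcome of a check, never the underlying ID data:

```python
# Minimal illustrative sketch: gate private messaging behind an "age
# verified" flag. The User model and flag name are invented for this
# example; the forum stores only the boolean outcome of a third-party
# 18+ check, never the ID data itself.

from dataclasses import dataclass

@dataclass
class User:
    username: str
    age_verified: bool = False  # flipped to True only after a successful check

def can_use_private_messaging(user: User) -> bool:
    """Allow DMs only for members who have passed an 18+ verification."""
    return user.age_verified

if __name__ == "__main__":
    alice = User("alice", age_verified=True)
    bob = User("bob")
    print(can_use_private_messaging(alice))  # True
    print(can_use_private_messaging(bob))    # False
```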
 
Doesn't the age verification only apply to content that should not be viewed by a minor?
As far as I can tell that may be correct. Without "effective" age verification the assumption is that children access the site. In that case you do a child harm assessment. If there are areas of the site that could be harmful, you then need age verification to keep children out of those areas.

So for most hobby sites and forums that aren’t porn, the main issue will be around DMs which can be used for harmful purposes unless of course they are public.

Currently on our site people like to use DMs and if the choice is to lose DMs or to make them only available to adults I’d prefer the latter, but would need to get an effective age verification system.
 
So here’s the question: what functions do DMs serve? This is going to vary community to community, but it might be worth listing what people use them for (or not) so we can talk about alternatives that might serve needs better - and maybe avoid escalating the risk.
 
if the choice is to lose DMs or to make them only available to adults I
Another option would be to make all DMs easily available to moderators - for example, a list of current DMs that moderators could read. People may not be happy about this, and privacy policies would likely need updating to make it clear that DMs are moderated.

For me these choices are going to come down to the cost of age verification, along with polling users to get their input and opinions on DMs either being discontinued, being available to mods, or perhaps becoming a paid service to cover the cost of effective age verification.
 
So for most hobby sites and forums that aren’t porn, the main issue will be around DMs which can be used for harmful purposes unless of course they are public.
I am really looking forward to seeing what X will do. Since the takeover by Musk I have more and more porn in my timeline, often as a trailer for an OF account. I never had any of this before, and the amount of porn bots and porn spam - publicly visible and pushed to my timeline for no reason, as well as via DM - is annoying. I had never blocked anyone on X, but I now have probably hundreds of blocked accounts. There are also other postings that are considered harmful for minors (e.g. videos of deadly accidents, violence, etc.), let alone the "normal" postings that have become so offensive and rage-filled, on such a massive scale, that many of them would probably also fall under the new Ofcom act.
 
Another option would be to make all DMs easily available to moderators - for example, a list of current DMs that moderators could read. People may not be happy about this, and privacy policies would likely need updating to make it clear that DMs are moderated.

For me these choices are going to come down to the cost of age verification, along with polling users to get their input and opinions on DMs either being discontinued, being available to mods, or perhaps becoming a paid service to cover the cost of effective age verification.
We could scan the content of DMs with AI tools, which is likely cheaper and more efficient than age checking, and report any findings for moderator intervention?
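
For what it's worth, here's a rough sketch of what that might look like, using Google's Perspective API (the same service behind the add-on mentioned elsewhere in this thread). The API key and threshold below are placeholders, and a real setup would write flagged messages to a moderator queue rather than print:

```python
# Hypothetical sketch: score a DM's text with Google's Perspective API and
# flag it for moderator review if it crosses a toxicity threshold.
# API_KEY and FLAG_THRESHOLD are placeholders, not real values.

import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")
FLAG_THRESHOLD = 0.8  # arbitrary; tune against your own moderation policy

def score_message(text: str) -> float:
    """Return Perspective's 0..1 toxicity estimate for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def review_dm(text: str) -> None:
    """Flag a DM for human review; a real system would queue it, not print."""
    if score_message(text) >= FLAG_THRESHOLD:
        print("DM flagged for moderator review")
```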
 
It still gets me that Ofcom describe a "Small Operation" as one with less than 7m monthly views. We have 200 members and our monthly views are probably less than 1,000 - several orders of magnitude less than "small". They need another category for "micro" organisations.
 
I have some experience in creating risk assessments. I am going to have a go at creating a template that others can use. I will share it here and gladly accept feedback from others. I am also chatting with some other forum owners. If we can get some consistency with regards to the measures we use, it makes our arguments and processes stronger.
 
I have some experience in creating risk assessments. I am going to have a go at creating a template that others can use.
Thanks, that sounds very useful. Will this be something that applies to "micro" forums, i.e. hobby forums with an admin who has a day job as well as moderating - i.e. something shorter than the 85 pages someone mentioned above?
 
Exactly. How on Earth are you supposed to actually and legitimately verify age?? They (authorities) HAVE to know it's impossible...
They are working on the assumption that you have to give it your best shot, as opposed to expecting perfection. Legislation already exists for the online sale of alcohol, dangerous tools, etc., and those sellers have to use "best shot" tools such as credit card checks, ID cards, email/phone checks against bank details, camera ID, etc. (all listed in post 321), as opposed to asking "Are you sure you are old enough to be drinking this beer or looking at naked people doing rude things?"
 
I can't see many of those agencies wanting to deal with small forums as it's not worth their while.
Prospective members are unlikely to send you their ID, and even if they did, who's to say it's legitimate and not doctored.
The whole thing is at odds with GDPR.
 
I can't see many of those agencies wanting to deal with small forums as it's not worth their while.
Prospective members are unlikely to send you their ID, and even if they did, who's to say it's legitimate and not doctored.
The whole thing is at odds with GDPR.
You could make it a requirement that they have a form of ID, using a tick box with "yes" and "no" options. Answering "yes" takes them to the ID card website, so that it is legit.
 
I am going to have a go at creating a template that others can use
You could work with @eva2000 who has already done a template that is on github (see post). That's what we were going to use as our starting point.

Which AI tools do that? I don't think members of my forum would be happy about that idea. Many of them hate anything to do with AI.
Whilst it doesn't scan and action anything, the closest I have found is https://xenforo.com/community/resources/al-perspective-api-2-x.9175/, which has been mentioned a couple of times in this thread. They do have a demo forum where you can try it out.

Otherwise, with XF2.3+ you could, I assume, use the webhook interface to send posts off to an AI engine for some kind of analysis and process the results. Obviously something more tightly integrated would be better, I expect, but these days I don't really code PHP so my thoughts naturally stray to alternative options.
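
As a rough illustration of that webhook idea (the payload shape below is an assumption on my part - check XenForo's actual webhook format before relying on any field names), a tiny receiver might look something like this:

```python
# Hypothetical sketch of a webhook receiver for the XF 2.3+ idea above:
# accept a POST from the forum, pull out the message text and hand it to
# whatever analysis you choose. The "content" -> "message" payload shape
# is assumed, not XenForo's documented format.

from flask import Flask, request

app = Flask(__name__)

def analyze(text: str) -> bool:
    """Placeholder for the real AI/moderation check; True means 'flag it'."""
    return "badword" in text.lower()  # stand-in logic only

@app.post("/xf-webhook")
def xf_webhook():
    payload = request.get_json(silent=True) or {}
    text = payload.get("content", {}).get("message", "")
    if text and analyze(text):
        # A real system would notify moderators or write to a review queue.
        app.logger.warning("Post flagged for review")
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=8080)
```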

I can't see many of those agencies wanting to deal with small forums as it's not worth their while.
I suspect that most small forums that are not doing anything fundamentally wrong (i.e. chat about a totally legal, everyday subject) would at most get generic emails from Ofcom if they felt a breach was happening, which is realistically only going to come to light from someone reporting your forum (probably a disgruntled troll), or from Ofcom commissioning some giant AI crawling monster that goes out looking for trouble. This is why I think that, realistically, doing the risk assessment(s), and using this as an opportunity to improve some of the moderation tools we already have, is probably all that can be done for small sites, and hopefully it should be enough.

Prospective members are unlikely to send you their ID, and even if they did, who's to say it's legitimate and not doctored.
The whole thing is at odds with GDPR.
Generally there seem to be two approaches:

The first is where you have information about the user (say from a purchase) and you submit this information to a checking agency - they will then try to validate age from that information and, if need be, ask the user for more info. This is the approach https://www.verifymyage.co.uk/ take, as their product seems mostly aimed at purchases of age-restricted goods, so typically a delivery address and so forth are already known. I'm about to have a little play with their service to see what it is like.

The second approach has the user essentially own a digital ID and present that digital ID to the website in question. The main player here seems to be Yoti, who power things like https://www.postoffice.co.uk/identity/easyid - the member has their ID and can give you just enough to validate what you need (i.e. 18+). In that case all the private information stays with Yoti: all you get is that they are 18+, nothing more - no name, address or anything sensitive. So in theory it's all GDPR-fine (a sketch of what this kind of exchange might look like is at the end of this post). Whether you even want a digital ID is another matter - it may well become a requirement through side-loaded legislation like the OSA, since upfront digital IDs have not proved popular with the public but are popular with governments, so direct legislation is liable to fail and it's safer to do it as "won't somebody think of the children" legislation.

In both cases the site owner pays for the validation service. All these services are really geared up for business-to-business, however, which isn't going to work well for individuals running little sites. I would expect some enterprising options to hit the market to cater to smaller sites - in fact I'm a little surprised there aren't already a few around, though I may just not have found them. I'm waiting to hear back from Yoti about pricing, but the verifymyage lot charge about £1 per check, so I'm assuming it'll be something of that order (unless there is a minimum commit). If they get back to me I'll update.
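
To illustrate that second, attribute-style flow: the sketch below is entirely hypothetical (every endpoint and field name is invented, and a real provider's SDK will differ), but it shows how little data the forum ends up holding:

```python
# Entirely hypothetical sketch of the "digital ID" approach: the member
# presents a token from an ID provider (Yoti-style), and all the site
# learns is a yes/no for "over 18". Endpoint and field names are invented
# for illustration only.

import requests

PROVIDER_URL = "https://example-id-provider.test/api/v1/age-check"  # made up
SITE_API_KEY = "YOUR_SITE_KEY"  # placeholder credential from the provider

def is_over_18(share_token: str) -> bool:
    """Exchange a user-presented token for a minimal 18+ attribute."""
    response = requests.post(
        PROVIDER_URL,
        headers={"Authorization": f"Bearer {SITE_API_KEY}"},
        json={"token": share_token},
        timeout=10,
    )
    response.raise_for_status()
    # Only the boolean comes back, so no name, address or date of birth
    # ever reaches the forum's database.
    return bool(response.json().get("over_18", False))
```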
 
Otherwise with XF2.3+ you could use the webhook interface I assume to send posts off to an AI engine for some kind of analysis and process the results.
However, posts aren't going to be the issue; we were talking about DMs, which surely are a whole different thing (unless they become public). That is currently my main concern.
 