UK Online Safety Regulations and impact on Forums

Whoops, Online Safety Act violations there!



If forums are going to use AI bots, they definitely need to ensure the bots don't go rogue!
Yes, it is all a bit surreal! Using AI to prevent illegal harms, when AI chatbots can themselves talk inappropriately. I assume Meta has its own chatbot. The AI in my addon is simply for scanning, not chatting, though.

In another scenario, AI might report someone's IP address to the police for attempting that kind of chat, maybe?!
 
Incidentally, I don't have the video symbol on my posts for uploading videos directly, like there is on here under the three dots. Could I have turned that off somewhere, maybe?
 
I also don't like thinking about all the illegal content and harms I haven't come across and didn't have on my site!
I know. Just testing the adult addon, I had to find material to test it with, and I was really uncomfortable with some of it. It started with fairly soft porn (naked people, OK), but then there was some twisted anime-style material which, although it was cartoons, was truly horrible and seemed like CSAM, so I just had to stop testing.
 
I just saw you could "turn off" all the streaming sites in BB Code Media. That made all the old embedded links disappear (e.g. old X or Instagram links), so I thought that would solve it for everything except YouTube. But when testing, it still allows a new link to be pasted: not embedded, but still clickable :(

What was that solution for sending all links posted (or posts with links) to manual moderation?
 
You could nobble the YouTube links with word censoring.
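
If the word censor route feels too blunt, the same idea in code is just a pattern replacement. A minimal sketch in TypeScript - the regex and the replacement token are illustrative assumptions, not XenForo's actual censor implementation (that is configured in the admin panel):

```typescript
// Illustrative sketch: neutralise YouTube links in a post before saving.
// The pattern and replacement token are assumptions, not how XenForo's
// built-in word censor works.
function censorYouTubeLinks(postBody: string): string {
  // Match youtube.com and youtu.be URLs, with or without a scheme.
  const pattern = /(https?:\/\/)?(www\.)?(youtube\.com|youtu\.be)\/\S+/gi;
  return postBody.replace(pattern, '[link removed]');
}

console.log(censorYouTubeLinks('Watch this: https://youtu.be/abc123 now'));
// -> "Watch this: [link removed] now"
```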
 
From what I can see, it seems liability stops with the video embedded on your website. I would just moderate all videos posted and not worry about what might happen if a user follows it.

Embedding a YouTube video means you're sharing content from another platform. If the embedded video is safe and you don't control subsequent content (like recommended videos), your direct liability is limited.

What I would do is vet every submitted video, maybe automatically moderate posts with YouTube embeds, and then, if the video is safe, use JavaScript to interrupt when a user leaves my domain with a warning: "You are now leaving the hamster forum, please be safe, any content found from here on is not under our direct control."

Or something like that.
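
For what it's worth, that JavaScript interrupt could be as simple as the sketch below (written as TypeScript; the warning text and the hostname check are placeholders, not a tested implementation):

```typescript
// Hedged sketch: warn users before they follow a link off the forum.
// Assumes it runs after the DOM has loaded; the wording is a placeholder.
document.addEventListener('click', (event: MouseEvent): void => {
  const target = event.target as HTMLElement | null;
  const link = target?.closest('a');
  if (!link || !link.href) return;

  const url = new URL(link.href, window.location.href);
  // Only intercept links that leave our own domain.
  if (url.hostname === window.location.hostname) return;

  const proceed = window.confirm(
    'You are now leaving the hamster forum, please be safe: ' +
    'any content from here on is not under our direct control.'
  );
  if (!proceed) event.preventDefault();
});
```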
 
Just looking through the Children's Risk Assessment quick guide, I noticed this:

"providers of user-to-user services to notify Ofcom of the kinds and incidence of any non-designated content you have identified as present on your service through your children’s risk assessment"

So if you find anything that could be non-designated content, you have to notify Ofcom, regardless of the size of the site.
 
"From what I can see, it seems liability stops with the video embedded on your website... I would just moderate all videos posted and not worry about what might happen if a user follows it."
Thanks. I wouldn't be worried about the content of the videos - they were completely harmless! Unless a spammer linked one, but that has never been an issue before.

I think for the basic risk assessment it might not be an issue. But for the Children's Risk Assessment, I'm not sure.
 
OK, so I've just been reading through the Children's Risk Assessment Guidance and Children's Risk Profiles (document linked below). There are various other guides about all the harms (which we all know what they are by now).

The difference between this and the standard risk assessment seems to be the "risk profile", which isn't just the risk of harmful content being on your site, but includes assessing the harm it could do if it were there.

A couple of key things I noticed:

1) They class user-to-user services as high risk for most of the illegal harms that would affect children, just because they're a user-to-user service (they clearly haven't a clue that there are masses of perfectly safe forums out there!). User-to-user means a high risk profile.
2) The way High, Medium and Low risk is assessed suggests that almost no one would be classed as low risk, and most "normal" sites would be classed as medium risk. The only thing mentioned that is likely to make your site low risk is if all content is pre-moderated (i.e. sent for manual moderation).

It says even if something inappropriate has never or only rarely been posted, what counts is the amount of harm it could do even if it happened once - as well as the number of children it could affect.

So I think it would be difficult to be "low risk" on a Children's Risk Assessment. Yet again this is geared towards big sites IMO, and they haven't a clue about normal forums. But what they are essentially saying is that seriously harmful content could be posted on any user-to-user site at any time, and the usual mitigations aren't a get-out. To be low risk, you either have to prove your systems have worked (which you can't if you've never had any dodgy content) or have pre-moderation.

Anyway, I'm going to have a breather from all this - I've been doing it all weekend.

Any chance you could develop your age estimation program, @eva2000?

 
I am developing it, but it won't be ready tomorrow, or any time soon.

Improvements were made to the backend testing of my MVP facial age verification and estimation system, including multi-model AI analyses for weighted evaluations and error-margin estimation. The system can be configured to run with a single AI model or with multiple AI models :)
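
Conceptually, the weighted multi-model evaluation boils down to something like this sketch (TypeScript; the model names, weights and combining formula are illustrative assumptions, not the system's actual internals):

```typescript
// Illustrative sketch: combine age estimates from several AI models into a
// weighted estimate with a rough error margin. Names and weights are made up.
interface ModelEstimate {
  model: string;   // identifier of the model that produced the estimate
  age: number;     // estimated age in years
  weight: number;  // confidence weight assigned to this model
}

function combineEstimates(estimates: ModelEstimate[]): { age: number; margin: number } {
  const totalWeight = estimates.reduce((sum, e) => sum + e.weight, 0);
  const age =
    estimates.reduce((sum, e) => sum + e.age * e.weight, 0) / totalWeight;
  // Weighted standard deviation as a crude error margin.
  const variance =
    estimates.reduce((sum, e) => sum + e.weight * (e.age - age) ** 2, 0) / totalWeight;
  return { age, margin: Math.sqrt(variance) };
}

// Example: three models disagree slightly; the combined result hedges.
const result = combineEstimates([
  { model: 'model-a', age: 16, weight: 0.5 },
  { model: 'model-b', age: 19, weight: 0.3 },
  { model: 'model-c', age: 14, weight: 0.2 },
]);
console.log(`Estimated age ${result.age.toFixed(1)} ± ${result.margin.toFixed(1)} years`);
```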

[Screenshots: cf-age-verification MVP demo, multi-model analysis, v2-3a to v2-3d]
 
Not sure about the age verification that scans ID... you can just google driving licence photos - there's a site that shows sample "fake ID" cards. It was in a driving licence format, and it was accepted, because it had a name and date of birth on it - even though it had the word "Fake" written across the front. It certainly extracted the name and date of birth ready for the next step, but I didn't continue with it.

Edit: I tried it again and completed the verification, and it did fail - it recognised the image was from a screen. So pretty accurate.

I then tried spoofing a face ID with an image found online, and it flagged that as a fake. But it got the age wrong: it said 11-17, and it was a photo of a 30-year-old woman. Then again, it was taken from a computer screen, so detecting age wouldn't work the same.

On the other hand, my own age was assessed as between 5 and 10 years younger (that last one was on Shufti).
 
I've messaged you. I don't want to start naming names publicly - it may have been a one-off. It rejected one that had an expired date on it, but a fake one with a future expiry date, a name and a date of birth was accepted. Edit: it was rejected once it got to the final stage, as it detected it was from a screen, and it also then asked for a selfie, which wouldn't have matched. So it was pretty accurate and detected spoofs. (That was Shufti.)

I did a few spoof attempts for face ID that either failed or were rejected, but I wondered if you get charged for every verification or just the accepted ones? E.g. if a kid was messing about trying to spoof it.
 
Anyway, my view so far is that there are going to be costs involved, whether you're just doing a children's risk assessment or doing age verification. Having read the CRA material, the onus is very much on you to ensure that not a single "harm" link or photo ever gets onto your site. If one does, someone could tell Ofcom, you could be investigated, and then it would either be costs for anything they recommend, or shutting down. It's about total prevention, as I see it. So it could mean paying for addons that have been tested to work reliably.

The AI addon for illegal harms that I have (which also detects links to harmful content and sends them for moderation) hasn't been fully tested by me, and I don't know how it's possible to fully test it - because I don't want to put a load of illegal material through it!
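
For illustration, the link-detection side of such an addon presumably reduces to something like the sketch below; `checkUrl` is a hypothetical stand-in for whatever AI classifier the addon actually calls, which lets you test the plumbing without feeding real harmful material through it:

```typescript
// Hypothetical sketch of "detect links and send the post to moderation".
// `checkUrl` stands in for the addon's real AI classifier; a stub that
// always returns true lets you test the moderation flow harmlessly.
interface Post {
  id: number;
  body: string;
}

function extractUrls(body: string): string[] {
  return body.match(/https?:\/\/\S+/gi) ?? [];
}

async function shouldHoldForModeration(
  post: Post,
  checkUrl: (url: string) => Promise<boolean>, // true = potentially harmful
): Promise<boolean> {
  const urls = extractUrls(post.body);
  if (urls.length === 0) return false;
  // Hold the post if any linked destination is flagged by the classifier.
  const flags = await Promise.all(urls.map(checkUrl));
  return flags.some(Boolean);
}

// Example with a stub classifier that flags everything.
shouldHoldForModeration(
  { id: 1, body: 'See https://example.com/page' },
  async () => true,
).then((hold) => console.log(hold ? 'Send to moderation' : 'Approve'));
```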
 