UK Online Safety Regulations and impact on Forums

I'd have hoped XF can trigger a webhook for new conversations, though I've yet to try it out. Otherwise I guess you'd want an add-on tied into the system to do that.

You're of course aware of https://xenforo.com/community/resources/xenconcept-read-all-conversations.9052/ which saves the admin trawling the database to look at conversations and is probably a useful tool. I really try to avoid ever needing to look at conversations, but over the last decade we've had the odd forum where we've had to.

Another tool might be https://xenforo.com/community/resources/ozzmodz-private-threads.5856/ which is much like conversations but in a forum node. The only real issue with it is the permissions model, which explicitly won't let you grant access to moderators - just admins (apparently this was an original design requirement). That's a bit of a niggle for me, as I want more control over who can and can't see such stuff. Still, now that ownership has changed, maybe that will change. I thought it looked interesting as an option for more ad-hoc projects and pieces of work where we had a subset of members working on something and they wanted it to stay within the group.

My worst-case plan for conversations right now is to just chew the data out of the database, maybe scan it for keywords and URLs (links and images), and dump anything of note somewhere where it can be glanced at (in much the same way attachments can be glanced at). Not perfect, but at least better than reading private messages. Well, that, and if I can sort out a sensible age restriction, probably limiting them to 18+.
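For what it's worth, that chew-the-data approach can be sketched in a few lines. This is just an illustration: the keyword list is made up, and the `xf_conversation_message` table mentioned in the comment is XenForo's standard conversation table, but you'd want to check the schema on your own install before relying on it.

```python
import re

# Hypothetical watch list -- in practice you'd maintain your own.
KEYWORDS = {"meet up", "snapchat", "don't tell"}

# Plain links plus BBCode image tags.
URL_RE = re.compile(r'https?://\S+|\[img\]\S+\[/img\]', re.IGNORECASE)

def scan_message(text):
    """Return keyword hits and URLs found in one conversation message."""
    lowered = text.lower()
    hits = sorted(k for k in KEYWORDS if k in lowered)
    urls = URL_RE.findall(text)
    return {"keywords": hits, "urls": urls, "flagged": bool(hits or urls)}

# You'd feed this rows pulled from the database, e.g.
#   SELECT message_id, message FROM xf_conversation_message
# and dump anything flagged somewhere glanceable for review.
```

The point of returning a structured dict rather than just a yes/no is that the glanceable dump can show *why* a message was flagged without anyone reading the whole conversation.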

There is this add-on https://xenforo.com/community/resources/xfcoder-censored-posts-reporter.8602/ which might be a bit noisy but it generates reports when censored words are used. Still worth adding to the list of existing tools that might aid someone in their OSA quest!
 
You're of course aware of https://xenforo.com/community/resources/xenconcept-read-all-conversations.9052/ which saves the admin trawling the database to look at conversations
Yes, I've just posted in the thread saying it would be useful to see conversations laid out in a forum format rather than going into each one individually.
Another tool might be https://xenforo.com/community/resources/ozzmodz-private-threads.5856/ which is much like conversations but in a forum node.
Thanks, I'll look into that - though of course it could hardly be called private threads if admins can view and moderate.
There is this add-on https://xenforo.com/community/resources/xfcoder-censored-posts-reporter.8602/ which might be a bit noisy but it generates reports when censored words are used. Still worth adding to the list of existing tools that might aid someone in their OSA quest!
Great thanks.
 
keywords, URLs (links and images) and dump that somewhere were it can be glanced at (in much the same way attachments can be glanced at).
Still seems quite a blunt tool. It's also a bit disturbing, having to learn the keywords that groomers and perverts use to get children to meet up or send images of themselves, etc. I'd rather find a way to have it all done by an outside agency, but then we're back to worrying how much that would cost.
 
We identified the 17 kinds of priority illegal content that need to be separately assessed. These are:
14. Firearms, knives, and other weapons
I take issue with this statement, as firearms and knives aren't illegal (although they are controlled via various acts). My forum discusses "outdoor sports", and therefore hunting, shooting and fishing are featured, along with the use of firearms and knives as appropriate tools for the task in hand.

I can also see the animal cruelty brigade shouting loudly about pictures of shot animals, although I would say this is not animal cruelty, as they were humanely shot by competent marksmen.
 
which AI tools do that? I don’t think members of my forum would be happy about that idea. Many of them hate anything to do with AI
You can interface with ChatGPT and ask it to scan posts for inappropriate material and flag them to a moderator. Obviously would need custom development, but isn't impossible. Most social media platforms will already be doing this I think.
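As an aside, OpenAI also has a dedicated Moderation endpoint built for exactly this kind of scanning, separate from the chat models (free or very cheap last I checked, though verify the current terms). A minimal sketch - the flagging/moderator side is left out, and you'd want to confirm the model name and response fields against OpenAI's current API docs:

```python
import json
import urllib.request

def moderate(text, api_key):
    """POST one post's text to OpenAI's moderation endpoint (network call)."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"model": "omni-moderation-latest",
                         "input": text}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def categories_to_flag(response):
    """Pure helper: pull flagged category names out of a moderation response."""
    result = response["results"][0]
    if not result["flagged"]:
        return []
    return sorted(c for c, hit in result["categories"].items() if hit)
```

Keeping the decision logic in a separate pure function like `categories_to_flag` means the "should a moderator see this?" rules can be tested without making API calls.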
 
You can interface with ChatGPT and ask it to scan posts for inappropriate material and flag them to a moderator. Obviously would need custom development, but isn't impossible.
I don’t need that as I can do it myself. My question was in regard to DMs while keeping within a privacy policy.
 
I take issue with this statement, as firearms and knives aren't illegal (although they are controlled via various acts). My forum discusses "outdoor sports", and therefore hunting, shooting and fishing are featured, along with the use of firearms and knives as appropriate tools for the task in hand.

I can also see the animal cruelty brigade shouting loudly about pictures of shot animals, although I would say this is not animal cruelty, as they were humanely shot by competent marksmen.
This is one bit that I don't quite understand about the global nature of the OSA. With regards to firearms and knives, the act appears to deal with the sale of these rather than mere discussion of them, so it could well be a moot point for your forum if you are just talking about their use.

The document you probably want to read is https://www.ofcom.org.uk/siteassets...-industry/illegal-harms/register-of-risks.pdf and in the act itself https://www.legislation.gov.uk/ukpga/2023/50/schedule/7

Ofcom say in section 14.10 in the register of risks:
14.10 Examples of firearms, knives and other weapons offences may include posting weapons for hire or sale on online marketplaces using both text and images. It could also include the sale of firearms, imitation firearms and knives to a person under the age of 18, the unlawful marketing of knives, and publication of material in connection with the marketing of knives.
But these are only crimes in the UK, surely? So how is it supposed to work? If your forum were in the US and two members were discussing the sale of a knife with two holes in the blade (illegal to own in the UK), presumably if both are US citizens in the US that's okay - I fail to see how it wouldn't be; you can't just export your own laws to the rest of the world (well, not without a decent bit of muscle!). But if one member is in the UK, is that a breach of the act? How does it work if a UK member is viewing a US advert for a knife sale? They have not purchased said knife, but Ofcom imply the very act of posting the sale is the illegal part. So is it illegal for a US member to post about a knife sale if a UK member may see that post, but totally fine if no one from the UK sees it? I fear this way madness lies in trying to get a straight answer about many of the 17 harms.
 
I have a forum with over 100,000 members. Most of these are from many years ago, because the forum has been running for about 25 years now; they are dormant accounts. And then there are all the spam accounts. To age verify all these accounts from all over the world would cost a fortune and bankrupt me - and it's impossible anyway. How would I age check a member from, say, China or Russia?

What solution does that leave? To ban all these members. And ban all the existing members because to age check them with a service would also be financially prohibitive. What a depressing state of affairs.
 
What solution does that leave? To ban all these members. And ban all the existing members because to age check them with a service would also be financially prohibitive. What a depressing state of affairs.
I was also wondering about existing members. Perhaps a system whereby the accounts are not banned but security-locked, so they have to reset the password and then go through an age check. A security lock on all old accounts is probably good practice anyway. Existing users would then need to verify on next login.
 
I have a forum with over 100,000 members. ...[SNIP]... To age verify all these accounts from all over the world would cost a fortune and bankrupt me. And also impossible. How would I age check a member from say China or Russia?
Remember that unless your forum specialises in pornographic material you don't have to do age verification. You need to assess the risks your forum poses to adults and to children and ensure you can robustly manage those risks. If your site's content is public and, like most forums, the subject is specialist and 90% of the posts are about that subject, then odds are you don't need to worry too much about the content beyond the odd off-topic bit (much, I imagine, like this forum's off-topic section). However, I think it fair to say from reading (some, not all) of the documentation that private messages / direct messages / conversations represent the largest realistic risk to any minor using the forum. Managing that risk is a bit harder. Easiest if you can just say: under 18, no PMs for you! However, that then gets back to age verification...

Regarding China/Russia, that's down to the age verification provider you have opted to use. The larger ones can handle verification from a lot of countries - see, for instance, the list of documents Yoti accepts. If you need to verify someone's age and can't, then you need to decide what the risk is of assuming they are either an adult or a minor, and opt for one or the other.
What solution does that leave? To ban all these members. And ban all the existing members because to age check them with a service would also be financially prohibitive. What a depressing state of affairs.
If we end up using an age verification service and there are elements of our forums we feel need to be adult only (say, for instance, PMs or certain nodes), we'll put all those permissions in a fresh 18+ group and remove them from everyone else. So on day one everyone will be considered a minor. We'll then move accounts we know are 18+ into the new group, and we'll use some kind of age verification system to move anyone else who is 18+ into that group. It'd be a user-driven process, and I expect we'd have to charge to cover costs. So I'd not expect dormant accounts to incur any costs - they will just sit there with the assumption they are minors. Spam accounts will need to cough up for verification if they want those features, or again we assume they are minors. Will it annoy some users? Yep. Are there better ways? Maybe. In essence we're effectively just making some of the forum features a one-off paid-for service.
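The default-to-minor scheme above boils down to a simple rule: adult-only features require membership of the verified group, everything else behaves as normal. A sketch of just that decision (group and feature names here are made up; in XenForo this would actually be done through the usergroup permission system rather than code like this):

```python
# Hypothetical names for illustration only.
ADULT_GROUP = "verified-18plus"
ADULT_ONLY_FEATURES = {"conversations", "adult-nodes"}

def allowed(user_groups, feature):
    """Adult-only features require the 18+ group; everything else
    falls through to normal permissions (assumed granted here)."""
    if feature in ADULT_ONLY_FEATURES:
        return ADULT_GROUP in user_groups
    return True
```

The key property is the default: an account that never verifies (dormant, spam, or simply unwilling) is denied the gated features without ever being banned.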


Speaking of costs I spoke with Yoti and alas they do have a monthly fee of about £200 which includes a decent number of credits for verifications, but the monthly cost is liable to make it prohibitive on an individual forum basis.
 
I have been working with a developer who has come up with a great app which sends all posts anonymously to OpenAI and has them checked for all the harmful content; anything flagged is then taken down until looked at by an admin.

You have to buy OpenAI API tokens; in one day my spend on tokens was around $1. But that is a quiet day for my forum.

I can say on my risk assessments (mostly no risk of harmful content) that all posts are scanned for harmful content by AI and anything flagged is almost instantly removed!
 
Interesting. Which monthly plan are you using with OpenAI? I forget quite what they offer, but I think the different plans gave you access to their different models? Also curious how is it working out on tokens per post - I think most of our content would be from 10-100 tokens per post, but that's just my rough guess based on their length. It's been some time since I prodded any of OpenAI's models and tokenisers.
 
Interesting. Which monthly plan are you using with OpenAI? I forget quite what they offer, but I think the different plans gave you access to their different models? Also curious how is it working out on tokens per post - I think most of our content would be from 10-100 tokens per post, but that's just my rough guess based on their length. It's been some time since I prodded any of OpenAI's models and tokenisers.
I am topping up: as soon as I hit $5, $20 more is added.

Today I have had 136 API requests (I assume posts) and that has used 75k tokens.

Yesterday it started scanning posts at 09:30:

429 requests
242k tokens.

That works out at about $0.81 for yesterday.
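Taking those figures at face value, the per-post numbers work out roughly as follows. This is just arithmetic on the quoted day (assuming one API request per post), not OpenAI's published pricing:

```python
# Figures quoted for yesterday's run.
requests = 429        # API requests (assumed one per post)
tokens = 242_000      # total tokens used
day_cost = 0.81       # USD spent

tokens_per_post = tokens / requests     # ~564 tokens per post
cost_per_post = day_cost / requests     # ~$0.0019 per post
implied_rate = day_cost / tokens * 1e6  # ~$3.35 per 1M tokens
```

So at roughly a fifth of a cent per post, even a forum doing a few thousand posts a day would be looking at single-digit dollars daily, which matches the "$1 on a quiet day" figure above.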
 
Ofcom has now published an interactive tool to help understand how to comply with the illegal content rules. According to Ofcom you must complete your first illegal content risk assessment by 16 March 2025. According to one of my admins it takes forever to complete although you can save your progress.

 
I have been working with a developer who has come up with a great app which sends all posts anonymously to OpenAI and has them checked for all the harmful content; anything flagged is then taken down until looked at by an admin.

You have to buy OpenAI API tokens; in one day my spend on tokens was around $1. But that is a quiet day for my forum.

I can say on my risk assessments (mostly no risk of harmful content) that all posts are scanned for harmful content by AI and anything flagged is almost instantly removed!
I am sure that the addon will help us significantly.
 
2,000 posts a day - I'll not be paying for AI checks on those! Why pay AI when I've got thousands of members quick to report anything that breaks the site's rules? Community moderation of public posts is so much better than AI. DMs, not so much - AI could be employed there.
 