Direct Message log, re: Online Safety Act

Mr Lucky

Well-known member
Someone correct me if I'm wrong, please, but my understanding is that if direct messages could be moderated, this would go a long way towards a low risk assessment for the Online Safety Act.

My current risk assessment shows direct messages to have the highest risk level.

Currently, seeing the direct messages of others as a moderator (with no access to the database) is possible only via a login-as-user add-on. However, having to check individually for every member would be impractical in most cases.

So this suggestion is for a log of all direct messages with mod viewing permissions so that they can be checked for anything harmful.

Ideally there could be an option for only DMs involving members identified as under 18 years (based on a user field). EDIT: however, this only addresses one or two of the 17 listed types of harm.

This would basically extend the site admin's current ability to view the contents of xf_conversation_message to moderators with the relevant permission, but in the more convenient form of a log in the ACP.
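To make the idea concrete, here is a rough sketch of the query such a log might run. The table and column names are taken from the standard XenForo 2 schema as I understand it; a real implementation would of course be a PHP add-on exposing this in the ACP behind a dedicated moderator permission, so this only illustrates the data involved.

```python
# Rough sketch of the query behind such a log, assuming the standard
# XenForo 2 schema (xf_conversation_message joined to xf_conversation_master).
import pymysql

conn = pymysql.connect(host="localhost", user="xf", password="secret",
                       database="xenforo",
                       cursorclass=pymysql.cursors.DictCursor)

with conn.cursor() as cur:
    cur.execute(
        """
        SELECT m.message_id, m.conversation_id, c.title,
               m.user_id, m.username, m.message_date, m.message
        FROM xf_conversation_message AS m
        JOIN xf_conversation_master AS c USING (conversation_id)
        WHERE m.message_date > UNIX_TIMESTAMP() - 86400  -- last 24 hours
        ORDER BY m.message_date DESC
        """
    )
    for row in cur.fetchall():
        print(row["username"], "|", row["title"], "|", row["message"][:80])
```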
 
Would that not be an invasion of privacy though?
I would not like to think that my DMs could be read at will by anyone else, moderator, admin or otherwise. In fact we've a strict policy on our site that private messages (now direct messages) are strictly private unless we're granted permission to view them, or we need to view them for legitimate reasons.

A better idea would be for direct messages to be checked by some sort of cron job against a list of words/phrases and flagged up to the admin/moderating team if there was a match, so that action could be taken.
Or perhaps a separate profanity filter for DMs that is more comprehensive than the one used on the public side.
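Something along these lines, as a minimal sketch - fetch_recent_messages() and flag_for_review() are hypothetical hooks into the forum, and the watchlist is a placeholder:

```python
# Minimal sketch of the cron-job idea: scan recent DMs against a word/phrase
# list and queue any matches for moderator review.
import re

WATCHLIST = ["example phrase", "another term"]  # placeholder terms only

PATTERN = re.compile("|".join(re.escape(w) for w in WATCHLIST), re.IGNORECASE)

def scan(messages):
    """Yield (message, matched text) for any DM containing a watched phrase."""
    for msg in messages:
        hit = PATTERN.search(msg["message"])
        if hit:
            yield msg, hit.group(0)

# Run from cron, e.g. every 15 minutes:
# for msg, term in scan(fetch_recent_messages(minutes=15)):
#     flag_for_review(msg["message_id"], reason=f"matched '{term}'")
```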
 
Would that not be an invasion of privacy though?
Only if you don't tell members about it (e.g. in a privacy policy) and they assume DMs are private.

In fact we've a strict policy on our site that private messages (now direct messages) are strictly private
In which case this suggestion would not be for you. I'm not suggesting it is compulsory.

A better idea would be that Direct messages could be checked by some sort of cron job against a list of words/phrases and flagged up to the admin/moderating team if there was a match so that action could be taken.
That is not as effective as monitoring. I'm sure a clever groomer could get round that. I think it's a good idea in addition, but not necessarily a better one, and it would be a different suggestion.
 
'Not telling anyone' would go against the site's/admin's integrity should it be revealed
Yes, I was agreeing with you that in that case it would be an invasion of privacy. But if your privacy policy makes it clear DMs can be moderated then I think it's fine.
In reality, to read every DM looking for issues would require full-time moderation, 24 hours a day on busy forums. Even Facebook doesn't do that

Not on a small/medium forum. We average only a few DMs a day. But note my suggestion also states it would be good if it could be linked to effectively age-verified children only. That would be very few on most forums, but is a very good way to provide a low risk assessment.
 
So this suggestion is for a log of all direct messages with mod viewing permissions so that they can be checked for anything harmful.

Ideally there could be an option for only DMs involving members identified as under 18 years (based on user field).
My understanding is that your obligations under the Act require you and your 'senior managers'* to monitor both public and private messages using 'accredited technology', regardless of any age restrictions.

*A suggestion that site admins are liable for information offences in addition to the owner.

If you want to try and find some verification, I believe it's at least partially covered in Clause 110 of the Act.
 
to monitor both public and private messages using 'accredited technology' regardless of any age restrictions.
Agreed, the age verification would only count in regard to the child-specific harms - but I am assuming every little helps towards a lower risk. But yes - it's only one part.
 
Even Facebook doesn't do that
Yet.

The bill would establish a regulatory framework for certain online services. These include user-to-user services, such as Facebook, and search services, such as Google. The government’s aim in introducing the bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”.
 
The other issue is that if you remove the ability to use one platform, those this bill is targeted at will simply move to another. Have the UK/EU governments not heard of the Dark Web, as an example?
It's another example of safeguarding rather than education - similar to reducing speed limits, introducing pedestrianisation and closing roads to reduce vehicle/pedestrian accidents, rather than teaching people the art of crossing roads safely and using pavements.
Whatever happened to the Green Cross Code man?
 
Using technology to flag messages is more feasible and probably more reliable.
I would also like to see AI-aided DM monitoring, i.e. flagging up suspicious DMs based on certain keywords, image detection etc. I would have added this to the OP but missed the edit deadline. It would be a useful addition to your moderator panel ;)
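As a rough illustration of the shape this could take, here is a sketch using OpenAI's hosted moderation endpoint purely as an example - whether any given service would count as Ofcom 'accredited technology' is a separate question entirely:

```python
# One possible shape for AI-assisted DM flagging, using a hosted moderation
# endpoint as an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_dm(text: str) -> bool:
    """Return True if the message should be surfaced to moderators."""
    result = client.moderations.create(model="omni-moderation-latest",
                                       input=text)
    return result.results[0].flagged

# if check_dm(dm_text):
#     flag_for_review(...)  # hypothetical hook into the moderator panel
```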
 
Using technology to flag messages is more feasible and probably more reliable.
The official recommendation is to use 'accredited technology' and I agree it's the only practical answer.

That doesn't necessarily help the average non-profit which has limited options that come with a price tag. A moderation log would negate any additional costs.
 
That doesn't necessarily help the average non-profit which has limited options that come with a price tag.
This has been my experience, even trying to get an answer from accredited technology companies has been difficult. I imagine they are swamped with enquiries and work at the moment.

But yes, this suggestion is only for small forums. One of mine is a residents' association with no budget for anything beyond the carol singing and the occasional street picnic. And Huw Edwards doesn't live here.
 
Would that not be an invasion of privacy though?
You probably need to update your privacy policy, and maybe do it for UK visitors/members only? But seriously, many folks will have trouble with this. OSA requirements would need relaxing for some.

This has been my experience, even trying to get an answer from accredited technology companies has been difficult. I imagine they are swamped with enquiries and work at the moment.
The OSA hasn't even released everything yet - that's a few months away - so those companies are probably just waiting to see as well
 
OSA hasn't even released everything yet - that's a few months away, so those companies probably just waiting to see as well
I'd like to think that in the meantime the developers of forum software here, there and everywhere are in the process of developing tools to help with compliance. After all, every single one of them runs a support forum and has at least as much to lose as the rest of us.

So far I've only seen one (open source) development team willing to engage in any conversation and their bottom line was 'consult your lawyer'.
 
So far I've only seen one (open source) development team willing to engage in any conversation and their bottom line was 'consult your lawyer'.
Which is not very helpful for small forums that are non-profit or even self-funded.

XenForo have discussed this, and although there are no plans, nothing has been ruled out; but after attending an online webinar by Ofcom it seems things may have changed since they looked at it earlier:
For the most part, XenForo is already compliant and upon reviewing the legislation, there's very little (if anything) the average forum administrator would need to do.

For anything else, the specific feature(s) that are needed should be created as their own suggestion (one suggestion for each thread).

Which is why I created this thread for the one suggestion (which I presume should be easy for either XenForo or a third-party developer) re: a DM mod log. Hence I was a bit surprised when the conversation drifted to the OSA itself.

I may be wrong, but unmoderated DMs seem to be the weakest link when trying to get a good score - i.e. what would be a green, or at worst yellow, in the typical risk matrix (the one below is for physical safety, not online, but the principle applies: likelihood vs consequences).


[Screenshot: example risk matrix, likelihood vs consequences]

If you take out unmoderated DMs then it becomes a 1 or at worst a 2 (you can't say it won't happen, but the harm won't last long and you can step in and stop it).
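As a toy illustration of that likelihood-vs-consequence scoring (the bands below are illustrative, not taken from any official OSA or Ofcom guidance):

```python
# Toy illustration of scoring in a typical 5x5 risk matrix.
def risk_score(likelihood: int, consequence: int) -> str:
    """Both inputs on a 1 (lowest) to 5 (highest) scale."""
    score = likelihood * consequence
    if score <= 4:
        return f"{score}: green (low)"
    if score <= 12:
        return f"{score}: yellow (medium)"
    return f"{score}: red (high)"

print(risk_score(4, 4))  # unmoderated DMs: "16: red (high)"
print(risk_score(1, 4))  # DMs moderated via a log: "4: green (low)"
```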
 