Better functionality to report and moderate illegal hate speech

Alpha1

Well-known member
Now that the first EU nation has approved a bill imposing hefty multimillion-euro fines on communities that fail to remove abusive content within 24 hours, we are in dire need of better functionality for content flagging, reporting and moderation. More information below.
We need to have functionality to (a rough sketch of how these pieces could fit together follows the list):
  1. Have members and guests flag abusive content and specify exactly what type of problem the content has.
    For example, by selecting the type of rule breach from a drop-down on the report interface.
  2. Have the content moderated if it falls into the category of hate speech, flaming, slander, insult or fake news.
  3. Show reports with abusive content at the top of the report center and clearly mark them as urgent.
  4. Automatically assign the report to the responsible moderators.
  5. Show the amount of time that has passed since the report was opened.
  6. Send an alert to the responsible moderators, supermoderators and administrators.
  7. If the report is not resolved within X hours, display a warning that the report is overdue or send reminder alerts about this.
  8. Optimally, there would be one-click macros for the most common moderator actions.
    For example: delete post, warn member & resolve ticket.
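
To make the list above more concrete, here is a minimal sketch in Python of how these requests could fit together as data and logic. XenForo itself is PHP and none of these names (ReportReason, Report, SLA_HOURS, delete_warn_resolve) exist in the product; they are purely illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import List, Optional

# Rule-breach categories a reporter would pick from the drop-down (point 1).
class ReportReason(Enum):
    HATE_SPEECH = "hate speech"
    FLAMING = "flaming"
    SLANDER = "slander"
    INSULT = "insult"
    FAKE_NEWS = "fake news"
    OTHER = "other"

# Reasons that mark a report as urgent and push it to the top of the report center (points 2-3).
URGENT_REASONS = {ReportReason.HATE_SPEECH, ReportReason.FLAMING,
                  ReportReason.SLANDER, ReportReason.INSULT, ReportReason.FAKE_NEWS}

SLA_HOURS = 24  # the "X hours" deadline from point 7; the German bill works with 24 hours

@dataclass
class Report:
    content_id: int
    reason: ReportReason
    opened_at: datetime = field(default_factory=datetime.utcnow)
    assigned_to: Optional[str] = None   # responsible moderator (point 4)
    resolved: bool = False

    @property
    def urgent(self) -> bool:
        return self.reason in URGENT_REASONS

    @property
    def age(self) -> timedelta:
        # time passed since the report was opened (point 5)
        return datetime.utcnow() - self.opened_at

    @property
    def overdue(self) -> bool:
        # drives the overdue warning / reminder alerts (point 7)
        return not self.resolved and self.age > timedelta(hours=SLA_HOURS)

def sort_report_queue(reports: List[Report]) -> List[Report]:
    """Urgent reports first, oldest first within each group (point 3)."""
    return sorted(reports, key=lambda r: (not r.urgent, r.opened_at))

def delete_warn_resolve(report: Report, moderator: str) -> None:
    """One-click macro from point 8: delete post, warn member, resolve ticket."""
    # The prints stand in for the real moderator actions.
    print(f"[{moderator}] deleting content {report.content_id}")
    print(f"[{moderator}] warning the author of content {report.content_id}")
    report.resolved = True

The key idea is that the reason chosen in the drop-down drives everything else: urgency, queue position and the overdue deadline.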

Due to the extreme fines that webmasters face (up to €50 million), I suggest this for both XF1 and XF2.

Background information:
Germany bill imposing €50M fine for failure to remove online hate crime, fake news fast enough
EU court: pay damages for ineffective Post Report moderation system.
Google will let users flag offensive content
 
They will go after smaller boards.
I'll believe it when I see them go after a motorcycle forum. It's more likely going to be the chans and fruits of the forum world, not hobbyist forums, unless they just let it run rampant (which is not running a forum in the first place, and puts it in the aforementioned category of forums they're going for).
 
Until I get a personal message that it crosses the line. But @Chris D will have to learn Japanese in his spare time, because translators cannot grasp the nuances of Japanese (since nouns aren't used) to make certain of a call like that.
 
In that case, it will certainly be a challenge for Chris D to learn Japanese, but good luck! Translation tools are handy, but they still have a long way to go when it comes to grasping subtle nuances. By the way, Japanese does make proper use of nouns! 😉
 
Proving me right. Work on that Japanese!

[Attached screenshot: Screenshot_20241221_020202_Interpreter.webp]

I don't even think my wife would understand, and she'd be mad if I woke her up to rewrite it to what she thinks you're trying to say.

Edit: It's not half bad, but as soon as you start adding more people and things to the conversation, it gets confusing and you often have to question who the subject is. The first sentence is wacky, though.
 
Is it a good thing or bad thing that I only get one of those references? 🤔
Probably for the best if you only understood the chans.

There's hours of disturbing lore on YT about the fruits that will give you a "whelp, that's enough internet for the day" with just learning 1 of the controversies.
 
While I suspect the EU won't be targeting forums anytime soon, the fact that there are few tools out there to help forums target harmful speech is disheartening. We are rushing into the age where tools like Google Perspective or OpenAI Moderation are free to use (as long as you're not a huge forum), and yet they seem to be greatly underused by forums. While most forums don't encounter significant problems with hate speech and similar content, we now possess tools that can monitor every post to ensure such issues don't go unnoticed. Unfortunately, these tools are not being utilized effectively, even though they can be set up to simply report issues when they arise.
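
To make that concrete: here is a minimal sketch of screening a post against OpenAI's hosted moderation endpoint, assuming an API key in the OPENAI_API_KEY environment variable and the request/response shape as currently documented (the model name and the screen_post helper are illustrative; Google Perspective could be wired up the same way).

import os
import requests

MODERATION_URL = "https://api.openai.com/v1/moderations"

def screen_post(text: str) -> dict:
    """Send one post to the hosted moderation endpoint and return its verdict."""
    response = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "omni-moderation-latest", "input": text},
        timeout=10,
    )
    response.raise_for_status()
    # "results" holds one entry per input; "flagged" is a boolean and
    # "category_scores" maps categories such as "hate" or "harassment" to 0-1 scores.
    return response.json()["results"][0]

if __name__ == "__main__":
    verdict = screen_post("Example post text goes here.")
    if verdict["flagged"]:
        print("Would report:", [c for c, hit in verdict["categories"].items() if hit])

Hooked into a post-save event, a call like this could simply open a report for a human moderator rather than block anything on its own.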
 
Probably for the best if you only understood the chans.

There's hours of disturbing lore on YT about the fruits that will give you a "whelp, that's enough internet for the day" with just learning 1 of the controversies.
Yep, I know of the chans but the only fruits forum that I can think of at the moment is a well known (within the vintage/retro computing community) site dedicated to 8-bit Commodore machines named after a citrus fruit.

I'll go with it being a good thing that I don't get the fruits reference and, at the same time, think to myself that "I'm really starting to feel my age! 👴". 😆
 
Moderators?

Some of whom aren’t tools.
Human moderators are going to be best utilized to make the final call, as they're able to reason and make calls that fit in with your forum's values or goals.

That said, tools can help bring questionable content to a human's attention instantly, thus making their job easier.
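
As a sketch of that division of labour, with invented thresholds and names (route_post, REPORT_THRESHOLD): the classifier's score only ever decides whether a human gets a report, never whether content is removed.

# Thresholds are assumptions; scores would come from whichever classifier is used
# (Perspective, OpenAI Moderation, ...), scaled 0-1.
REPORT_THRESHOLD = 0.7   # at or above this, open a report for a human moderator
LOG_THRESHOLD = 0.3      # borderline scores are only logged

def route_post(post_id: int, worst_score: float) -> str:
    """Decide what the tool does with a post; a human always makes the final call."""
    if worst_score >= REPORT_THRESHOLD:
        # e.g. create an urgent entry in the report queue for moderators
        return f"post {post_id}: auto-reported for human review"
    if worst_score >= LOG_THRESHOLD:
        # keep a record so recurring borderline posters can be spotted later
        return f"post {post_id}: logged only"
    return f"post {post_id}: no action"

if __name__ == "__main__":
    for pid, score in [(101, 0.92), (102, 0.45), (103, 0.05)]:
        print(route_post(pid, score))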
 
I think the size of a site matters, though. With c. 25 active users and maybe one to two hundred posts on a busy day (and often far fewer), just relying on reports and human mods should be enough for us. Whereas on some of the much larger, busier forums I am on as a user, I could see wanting some algorithmic or AI help in monitoring things.
 
Dame Melanie Dawes, Ofcom's Chief Executive, said:
For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.
The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.

Those that come up short can expect Ofcom to use the full extent of our enforcement powers against them.

This is just the beginning


Pfff. The regulator certainly does its best to sound alarming.

The Online Safety Act lists over 130 ‘priority offences’, and tech firms must assess and mitigate the risk of these occurring on their platforms. The priority offences can be split into the following categories:
  • Terrorism
  • Harassment, stalking, threats and abuse offences
  • Coercive and controlling behaviour
  • Hate offences
  • Intimate image abuse
  • Extreme pornography
  • Child sexual exploitation and abuse
  • Sexual exploitation of adults
  • Unlawful immigration
  • Human trafficking
  • Fraud and financial offences
  • Proceeds of crime
  • Assisting or encouraging suicide
  • Drugs and psychoactive substances
  • Weapons offences (knives, firearms, and other weapons)
  • Foreign interference
  • Animal welfare
 
I don’t think they’re seeking to stop discussion about it. They’re probably seeking to prevent criminal gangs from coordinating it. The current government is attempting to take action against gangs that arrange unlawful immigration.
 
Right. So the only criminal gang that can import them is the government themselves as there's no immediate money in it if criminals do it.

Can't pass spending bills to help immigrants if you're unable to document the illegal ones that (you help) enter because you won't have a baseline of how little you need. 🤷‍♂️

I'm sure this post will be of questionable content once that bill passes, so you might want to just flag it for removal now. :-P
 