
EU court: pay damages for ineffective Post Report moderation system.

Alfa1

Well-known member
#1
Please read this EU court ruling: http://hudoc.echr.coe.int/webservices/content/pdf/001-155105?TID=qowwttwprb

Here is a rundown:
  • It concerns Delfi, a big board which received 10,000 new posts per day. 20 abusive posts were posted to a thread but were not noticed by staff. The posts are of a type every forum encounters now and then (flaming, hate speech, insults). Delfi removed the abusive posts immediately after they were reported.
  • The posts were visible on the site for six weeks, and the court therefore held the website responsible for the damage to reputation.
  • The site owner argued that free speech applies, but the EU court ruled that free-speech protection does not extend to hate speech, flaming, insults and similar abuse.
  • The EU court specifically notes that the website should have implemented a more effective system than the post report system, so that abusive posts are discovered and deleted quickly.
  • Delfi has to pay damages.

This court ruling applies to the whole of the EU.

I think this ruling has to be seen in light of the fact that large sites like Facebook, Wikipedia, etc. have crowd moderation, filters, and other proactive moderation mechanisms in place. Most forum software and CMS commenting systems still have moderation tools built on the same basics as a decade ago.

How do you think XenForo can adapt to this change in law?
 

Alfa1

Well-known member
#5
Mind that the posts were removed when they were reported. It took six weeks before that happened.
In my experience many people will not actively report others.
 

Chris D

XenForo developer
Staff member
#7
This is where having positive and negative ratings is useful, in the sense that people are more likely to use that function than the report function.

Being able to vote a comment down could be linked to a system that would automatically report or moderate that comment as soon as a threshold is reached.

That said, if you have a bunch of horrible people who up-vote nasty comments which should be down-voted, then it's not completely fail-safe.
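That threshold idea could be sketched roughly like this (a minimal illustration, not XenForo code; all names and thresholds are hypothetical). Requiring votes from distinct, established accounts is one way to blunt the brigading problem mentioned above:

```python
# Hypothetical sketch: auto-moderate a post once enough distinct,
# established users have down-voted it. Thresholds are illustrative.
DOWN_VOTE_THRESHOLD = 5      # distinct down-votes before action is taken
MIN_VOTER_POST_COUNT = 20    # only count votes from established accounts

def should_auto_moderate(down_voters, get_post_count):
    """Return True if the post should be sent to the moderation queue.

    down_voters: iterable of user ids that down-voted the post.
    get_post_count: callable mapping a user id to that user's post count.
    """
    # set() ensures one user cannot trip the threshold alone
    trusted_votes = sum(
        1 for user_id in set(down_voters)
        if get_post_count(user_id) >= MIN_VOTER_POST_COUNT
    )
    return trusted_votes >= DOWN_VOTE_THRESHOLD
```

A real implementation would also want to weight or ignore votes from brand-new accounts, which is why the sketch filters on post count.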
 

Mike

XenForo developer
Staff member
#8
It's worth pointing to this article on what happened: http://arstechnica.co.uk/tech-polic...ision-websites-are-liable-for-users-comments/

I don't think the implications of the ruling are as clear cut as laid out in the original post. This ruling was by the European Court of Human Rights, and was related to a specific Estonian law. While clearly referencing the original case, this specifically was between Delfi and Estonia and related to whether the Estonian law violated ECHR doctrine.

While the ruling may form part of an opinion that can have influence over future cases, I don't think it's totally clear cut. Often, precedent for cases ends up being far more narrow than it may first appear.
 

Alfa1

Well-known member
#9
An interesting quote from the court ruling:
The court found that the applicant company itself was to be considered the publisher of the comments, and it could not avoid responsibility by publishing a disclaimer stating that it was not liable for the content of the comments.
 

Zynektic

Well-known member
#10
@Chris D Possibly add some form of word blacklisting which, if the post is reported, gives it a higher negative weighting or moves it to a specific moderation queue?

Eh... can be worded better but hope that makes sense.
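As a rough sketch of that idea (the word list, weights, and queue names are invented for illustration; this is not an existing XenForo mechanism), a reported post could be scored against a blacklist and routed accordingly:

```python
# Hypothetical sketch: raise a report's priority when the reported post
# matches a word blacklist. Words and weights are illustrative only.
import re

BLACKLIST = {"scum": 2, "idiot": 1}  # word -> severity weight (example values)

def report_severity(post_text, base_severity=1):
    """Return a severity score: base report weight plus blacklist hits."""
    words = re.findall(r"[a-z']+", post_text.lower())
    return base_severity + sum(BLACKLIST.get(w, 0) for w in words)

def route_report(post_text, queue_threshold=3):
    """Send high-severity reports to a dedicated moderation queue."""
    if report_severity(post_text) >= queue_threshold:
        return "priority-moderation-queue"
    return "normal-report-queue"
```

Simple word matching is easy to circumvent (misspellings, other languages), so this would only be one signal among several.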
 

Mike

XenForo developer
Staff member
#11
Perhaps more significantly from the ruling:
116. Accordingly, the case does not concern other fora on the Internet where third-party comments can be disseminated, for example an Internet discussion forum or a bulletin board where users can freely set out their ideas on any topics without the discussion being channelled by any input from the forum’s manager; or a social media platform where the platform provider does not offer any content and where the content provider may be a private person running the website or a blog as a hobby
 

Alfa1

Well-known member
#12
While the ruling may form part of an opinion that can have influence over future cases, I don't think it's totally clear cut. Often, precedent for cases ends up being far more narrow than it may first appear.
True, though this case unfortunately does not stand alone. There have been similar cases in Germany where admins were held responsible for content.

Recently many EU countries have changed their laws in regard to hate speech, mainly because of terrorism- and extremism-related events in Europe and IS territory. This directly changes and limits free speech on forums.
 

Alfa1

Well-known member
#13
Perhaps more significantly from the ruling:
Thanks, that is very significant. At least that rules out forums that are not registered as a company or foundation, that do not write their own content, and whose staff do not intervene in discussions.

However, this still seems to pose a problem for communities whose staff intervene in discussions, or sites that write their own articles, contribute their own content, or are registered as a foundation or company.
 

Fred.

Well-known member
#14
It's getting scary sometimes. I already had a newspaper from the Netherlands threaten to take me to court because of content that was posted on one of my forums.
 
#15
This is where having positive ratings and negative ratings is useful in the sense that people are more likely to use that function more than the report function. Being able to vote a comment down could be linked to a system that would automatically report or moderate that comment as soon as a threshold is reached.
@Chris D Could you recommend the best add-on to handle this vote-up/vote-down thread and comment system? Sounds like something I could use.
 

Alfa1

Well-known member
#17
The reason behind the exclusion seems to be that the publisher is held responsible for hate speech. Publishing hate speech is illegal in many countries. The court decided that Delfi is a publisher because:
  1. Delfi actively moderates its posts and its 5 moderators had previously deleted content. Its services are not just technical in nature.
  2. Delfi is a registered entity.
  3. Delfi staff posts content.
  4. Delfi gets income from ads.
  5. Users/guests could not delete or edit their posts and therefore had no control over them once published.
  6. Delfi had set rules to the commenting including that the posts needed to be on topic.
  7. Delfi invited users to post with a 'post comment' button and by advertising the number of posts with every article/thread. Thus Delfi sought to attract more posts.
  8. Publishing of news and comments on a portal is a journalistic activity.
As a publisher or content provider Delfi was liable because:
  1. Delfi had an obligation to take effective measures to limit the dissemination of hate speech and speech inciting violence.
  2. Delfi should have been aware of insults, threats and hateful posts. Not being aware of such unlawful posts for 6 weeks almost amounts to wilful ignorance.
  3. Delfi should have deleted the posts on its own initiative, quickly after publication.
  4. Their censor (word filter) function and post report function were insufficient. Delfi should have implemented effective mechanisms to discover and delete the posts without delay. "The Court notes that as a consequence of this failure of the filtering mechanism, such clearly unlawful comments remained online for six weeks."
  5. Delfi should have taken proper care of the risk that racist speech would be posted.
 

Alfa1

Well-known member
#19
Yeah, post reports are insufficient. It needs to be something more effective.
Probably a post rating that allows users to mark a post as hateful or insulting; people use ratings plentifully. After a threshold is reached, the post is sent to the moderation queue or deleted. Similar to your crowd moderation add-on, but based on post ratings / down-voting.
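A tiered crowd-moderation rule along those lines could look like this (thresholds and action names are invented for illustration, and deliberately simpler than any real crowd moderation add-on):

```python
# Hypothetical sketch: tiered actions driven by a "hateful/insulting"
# post rating. Thresholds and action names are illustrative only.
QUEUE_THRESHOLD = 3    # ratings before the post is hidden pending review
DELETE_THRESHOLD = 8   # ratings before the post is soft-deleted outright

def crowd_moderation_action(hateful_rating_count):
    """Map the number of 'hateful' ratings on a post to a moderation action."""
    if hateful_rating_count >= DELETE_THRESHOLD:
        return "soft-delete"
    if hateful_rating_count >= QUEUE_THRESHOLD:
        return "send-to-moderation-queue"
    return "leave-visible"
```

The two-tier design means a few ratings only hide the post for moderator review, while outright removal needs a much stronger crowd signal.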
 

feldon30

Well-known member
#20
Perhaps more significantly from the ruling:
116. Accordingly, the case does not concern other fora on the Internet where third-party comments can be disseminated, for example an Internet discussion forum or a bulletin board where users can freely set out their ideas on any topics without the discussion being channelled by any input from the forum’s manager; or a social media platform where the platform provider does not offer any content and where the content provider may be a private person running the website or a blog as a hobby
I've read it twice and, despite their protestations to the contrary, nothing in the ruling would exculpate a forum owner in the same situation. The ruling shoulders website owners that allow user comments with unsustainable costs and exposes them to expensive litigation.
The reason behind the exclusion seems to be that the publisher is held responsible for hate speech. Publishing hate speech is illegal in many countries. The court decided that Delfi is a publisher because:
  1. Delfi actively moderates its posts and its 5 moderators had previously deleted content. Its services are not just technical in nature.
  2. Delfi is a registered entity.
  3. Delfi staff posts content.
  4. Delfi gets income from ads.
  5. Users/guests could not delete or edit their posts and therefore had no control over them once published.
  6. Delfi had set rules to the commenting including that the posts needed to be on topic.
  7. Delfi invited users to post with a 'post comment' button and by advertising the number of posts with every article/thread. Thus Delfi sought to attract more posts.
  8. Publishing of news and comments on a portal is a journalistic activity.
So any forum/website that meets those 8 criteria is legally responsible for user-posted content? Most forums meet conditions 1-4, 6, and 7. Some meet 8. The only outlier there is #5, disallowing users from editing or deleting their own posts. The court may think they are drawing a bright line that separates forums and other social media sites from Delfi, but they have done the opposite.
As a publisher or content provider Delfi was liable because:
  1. Delfi had an obligation to take effective measures to limit the dissemination of hate speech and speech inciting violence.
  2. Delfi should have been aware of insults, threats and hateful posts. Not being aware of such unlawful posts for 6 weeks almost amounts to wilful ignorance.
  3. Delfi should have deleted the posts on its own initiative, quickly after publication.
  4. Their censor (word filter) function and post report function were insufficient. Delfi should have implemented effective mechanisms to discover and delete the posts without delay. "The Court notes that as a consequence of this failure of the filtering mechanism, such clearly unlawful comments remained online for six weeks."
  5. Delfi should have taken proper care of the risk that racist speech would be posted.
1. Is there any technology that filters out all hate speech and speech inciting violence which cannot be easily circumvented?
2 & 3 & 5. The court seems to think that user-generated content sites must hire an army of staff to READ all comments and user-generated posts to ensure that nothing illegal is taking place. This is the China Firewall system. It would shoulder websites with an unbearable responsibility with costs far in excess of any revenue gain from allowing comments. If enforced, most EU-based websites will immediately remove Commenting functionality so as to avoid being a target. Typical of EU overreach, the justices did not consider whether their demands are realistic or would bankrupt a company if enforced.
4. Again, show me which technology is bulletproof, or even 95% effective, at removing comments in multiple languages that a third party deems undesirable.