Alpha1
Well-known member
Facebook has long been at the centre of all the controversy surrounding the spread of fake news ever since the U.S. Presidential Election. Especially after the Cambridge Analytica privacy controversy, the social network giant has been under fire.
So far, Facebook has implemented multiple measures to help users detect fake news on the platform and to reduce its spread. The company is now taking things a bit further by rating users on trustworthiness. Yes, Facebook is assigning users a reputation score to better fight fake news on its platform.
Exactly how the new reputation score is being used is unclear.
Facebook says it measures a bunch of different factors to determine a user’s reputation score, though it’ll specifically monitor things like what publishers on Facebook are considered trustworthy by a user, what kind of posts they flag as false, etc. The point of the reputation score is to help understand the trustworthiness of a user so that the company can use it to better fight fake news.
Facebook already allows users to report fake news on the platform, but the problem was that users who didn't agree with certain articles or publishers were falsely reporting those as fake news. As a result, Facebook's fact checkers were wasting a ton of time reviewing reports from users who were simply reporting content because they disagreed with it. The new reputation score could help prevent that, as Facebook will now give more weight to reports from users with higher trustworthiness than to reports from users with much lower trustworthiness.
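Facebook hasn't published how the score is computed, but the idea described above can be sketched in a few lines: rate a user by how often their fake-news flags matched fact checkers' verdicts, with smoothing so that users with few flags start out neutral. Everything here (function names, the smoothing constants) is a hypothetical illustration, not Facebook's actual formula.

```python
# Hypothetical sketch of a per-user trust score based on how often a
# user's fake-news flags were confirmed by fact checkers.
# The Laplace-style smoothing (+1 / +2) keeps new users near a
# neutral 0.5 instead of jumping to 0 or 1 after a single flag.

def trust_score(flags_confirmed: int, flags_rejected: int) -> float:
    """Smoothed fraction of a user's flags that were confirmed."""
    total = flags_confirmed + flags_rejected
    return (flags_confirmed + 1) / (total + 2)

# A user whose flags are usually confirmed scores high...
print(trust_score(48, 2))   # ~0.94
# ...while one who mostly flags content they merely disagree with does not.
print(trust_score(3, 47))   # ~0.08
# A brand-new user sits at the neutral midpoint.
print(trust_score(0, 0))    # 0.5
```

The smoothing matters: without it, a single lucky flag would give a new account a perfect score, which is exactly the kind of signal you don't want to weight heavily.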
Full story here: https://www.thurrott.com/cloud/soci...iving-users-reputation-score-tackle-fake-news
This is very interesting though, because I actually have related functionality in my project queue to score the validity of user reports and post reviews.
Simply because it saves a lot of work if you know the validity of a report or post review in advance. Reports and reviews by some users have an accuracy rate of over 95%, which basically means that their reviews do not need to be moderated and most of their reports can be automated.
It also means that these users are moderator prospects or could fulfil minor moderation duties.
Conversely, there are users whose reports and reviews always need to be scrutinized or even blocked.
I think there is a lot of use for this. It's basically a weighting of reports by users, where the report of one user carries more weight than the report of another. Hence you can assign automatic actions ranging from putting content/accounts in the moderation queue, to moving content, or even deleting content.
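The escalating automatic actions described above could be sketched as follows: sum the per-reporter weights on a piece of content and map the total to an action. The thresholds and action names here are made up for illustration; a real system would tune them against moderation outcomes.

```python
# Hypothetical sketch: map the summed weights of reports on one piece of
# content to an escalating automatic moderation action.
# Each weight is a per-reporter trust value in 0..1 (higher = more trusted).
# All thresholds below are illustrative, not from any real system.

def action_for(report_weights: list[float]) -> str:
    score = sum(report_weights)
    if score >= 3.0:
        return "delete"   # several trusted reporters agree
    if score >= 1.5:
        return "move"     # move content out of public view
    if score >= 0.75:
        return "queue"    # put content in the moderation queue
    return "ignore"       # not enough trusted signal yet

# One 95%-accurate reporter outweighs several unreliable ones.
print(action_for([0.95]))                   # -> queue
print(action_for([0.1, 0.1, 0.1]))          # -> ignore
print(action_for([0.95, 0.9, 0.85, 0.8]))   # -> delete
```

The design choice is the same one attributed to Facebook above: a single report from a high-trust user triggers review, while any number of reports from users who habitually file bad reports can be safely ignored or left for manual scrutiny.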