Why Let Users Do It?
It’s difficult for a company like Facebook, with millions of posts appearing on its platform every hour, to hire enough staff to comb through all of that content and decide which sources deserve a stamp of approval. Manual review would demand a massive effort with no end in sight. So one of the first solutions that comes to mind is using machine learning to automate the process. This method, though less costly, can backfire: machine learning is still not advanced enough to reliably judge whether a story is truthful.

Facebook’s most visible attempt at “fake news detection” came when it began using data from third-party fact checkers to tag certain posts as “disputed.” This, of course, only made users more likely to click on the stories that appeared with a red flag next to them. In 2018, Adam Mosseri, Facebook’s Head of News Feed, announced that the social network was experimenting with surveys, asking users themselves what they consider trustworthy. Here’s where the problem lies: we don’t know how Facebook selected the participants for those surveys.
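To make that limitation concrete, here is a minimal sketch of the kind of text classifier an automated system might rely on. This is an illustrative assumption, not Facebook’s actual pipeline: it uses scikit-learn’s TF-IDF features with logistic regression, and the headlines and labels are invented placeholders.

```python
# A minimal sketch of an automated "fake news" classifier.
# NOT Facebook's system; headlines and labels are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = flagged as dubious, 0 = not flagged.
headlines = [
    "Scientists confirm chocolate cures all known diseases",
    "Local council approves new library budget",
    "Shocking photo proves the moon landing was staged",
    "Central bank holds interest rates steady",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each headline into word-frequency features; the model
# then learns which words correlate with the "dubious" label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# The model scores surface wording, not factual accuracy.
print(model.predict_proba(["You won't believe what this senator did"])[:, 1])
```

The weakness is visible in the last line: the classifier learns word patterns that correlated with past labels, so a soberly worded false story can slip through while a sensationally worded true one gets flagged. That is the reliability gap described above.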
Confirmation Bias Of The Majority
There’s an old adage that’s very relevant here: “What is right is not always popular, and what is popular is not always right.” If you could get 299 out of 300 people to agree that Europe doesn’t exist, the continent wouldn’t disappear on that account. Most people will trust one news source over another, and that source may not be the most reliable; it may simply resonate with what the majority wishes were true. Retelling a story involves no peer review and no empiricism; it is an account filtered through the teller’s own perspective, and that perspective carries at least a tiny amount of bias, because we’re human beings. We also have to consider that Facebook’s users usually post news stories to make a point, not merely to pass information along mechanically. The upshot is that surveying people to determine what is or isn’t fake news may not be the best idea, because the only firm conclusion we can draw from such data is whether a news source is popular.
On The Other Hand…
The purpose of the survey was to determine which sources users find familiar and trustworthy, and Facebook says the data will “help to inform ranking in News Feed.” That sounds less like the company is trying to block certain news items from appearing on people’s feeds and more like it is ranking stories by how trustworthy most people find their sources; a low ranking would signal that most people don’t trust what you’re seeing in your news feed. We still, however, have reason for concern: we know nothing about how the participants were chosen. Facebook has said only that the sample was “diverse and representative.”

In the end, it is human nature to be subjective and biased. No technology or legal system on this planet can put a stop to that, especially since human beings have a hand in creating both. Do you think we will ever find a solution to fake news, or is this just part and parcel of having an internet connection these days? Tell us your thoughts in a comment!