Facebook Fake News Reporting Not Easy

Jan 11, 2018

‘Part of the problem’

While scrolling through their personalized news feeds on Facebook, users are periodically shown information that is false, or at least misleading. Often the fraudulent content comes from friends or followed pages, but frequently the post is labeled "sponsored," meaning someone paid Facebook to promote and place it on the platform.

Facebook staff are supposed to thoroughly vet the bidder and their advertisement, ensuring that both are legitimate.

However, many slip through the cracks.

Facebook's oft-used justification is that, given the enormous volume of appeals it receives daily, it can't always keep up. For a massive company with $27.64 billion in revenue and $23.85 billion in gross profit in 2016, it's not clear why a resource constraint would prevent more robust review of these posts. And since the content is sponsored, Facebook makes money on these posts, leading some analysts to question the sincerity of Facebook's stated desire to aggressively combat fraudulent content.

“Current tech leaders might help some, but they are also part of the problem,” Mark Jamison, a visiting scholar at the American Enterprise Institute, wrote in a blog post titled “Will tech firms save us from fake news?”

“The key to combating fake news probably lies in creating an economic engine that is more powerful than the one that drives fake news,” he continued. “Since costs are already minimal, the engine would have to give consumers more value. Sounds like we need a disruptive innovation, which is what new tech businesses are all about.”

When Facebook's review teams fail to catch fraudulent content themselves, they tap the larger Facebook community, where the average user can help police the platform and cleanse it of misinformation.

But a review conducted by The Daily Caller News Foundation shows that the reporting options vary. Sometimes the ability to report content is available within one or two clicks. Other times, a user must navigate multiple, often confusing menus to find the reporting option, raising the question: If Facebook really wanted to delete fraudulent content, shouldn't reporting be easier to accomplish and more prominently featured?

For instance, here is what is offered when using an iPhone or iPad.

There is no option anywhere here to report misleading or fraudulent content. Users may feel that the “Why am I seeing this?” option is the best way to achieve the intended goal.

But it's not. Users are instead supposed to know to click "Hide Ad," which then prompts them to explain why.

Still not being shown any reporting capability, a user is given three potential responses: “It’s not relevant to me,” “I keep seeing this,” and “It’s misleading, offensive or inappropriate.”

People must then correctly assume that the final option is the right one for reporting, which leads them, finally, to the desired input: "It's a false news story."

Options differed across phone models, such as a Google Android-powered Motorola smartphone, among others. Facebook did not fully explain why different options were offered on different occasions after receiving The Daily Caller News Foundation's inquiry.

Facebook is, of course, a private company, free to do as it chooses without fear of government punishment, at least under current law.

“So, legally, it isn’t wrong for Facebook to profit from fake news on its platform,” Tom Struble, tech policy manager at R Street, told TheDCNF. “Indeed, there are tons of unscrupulous websites and platforms that shamelessly profit from spreading misinformation and fake news (e.g., Holocaust-denying websites that make money by selling ads).”

"Morally, however, it's at least arguable that Facebook is wrong to profit from misinformation shared on its platform," he continued.

“Fake news” is certainly not a new problem, but for something as widely used as Facebook, the company seems to agree with critics that it should do more.

Facebook CEO Mark Zuckerberg recently admitted that his company makes "too many errors enforcing our policies and preventing misuse of our tools" and promised to do better in 2018. How genuine his seemingly remorseful pledge is, though, remains dubious considering that many users aren't easily able to report fraudulent content.

With all of its resources and vast troves of cash on hand, Facebook can surely afford to restructure or fortify its efforts to purge more misinformation from the platform, while still limiting the amount of legitimate content inappropriately taken down. If it doesn’t, the appeal of advertising dollars may be the culprit.

“Perhaps more so than traditional industries, online platforms live or die based on their brand reputation, so I think Facebook has a strong incentive to fix the Fake News problem on its own (just as Google and YouTube had strong incentives to fix their problems),” Struble concludes. “If it doesn’t, it risks losing its users and advertisers to competing platforms (Twitter, Snap, etc.), and/or potentially new legislation from Congress forcing it to implement costly new compliance-monitoring schemes.”



Source: Facebook Fake News Reporting Not Easy | The Daily Caller
