TECHNOLOGY

Can Facebook’s misinformation problem be fixed?

Facebook admits it responded to ‘information operations’ during the 2016 election. Will the site ever rid itself of distorted information?


On Thursday, Facebook’s security team released a short report on how it has expanded its focus “from traditional abuse behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.” In other words: so-called ‘fake news’.

In the report’s brief case study on the 2016 U.S. election, Facebook admits that it responded to “several situations” that it assessed “fit the pattern of information operations”, which Facebook defines as “actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.”

Those actors, Facebook says, use what it calls “false amplification” – a “coordinated activity by inauthentic accounts with the intent of manipulating political discussion”. Interestingly, Facebook specifies that most of that false amplification of posts is not due to “bots”, or automated processes, but rather “by coordinated people who are dedicated to operating inauthentic accounts.”

RELATED: Why Canadians should care about Facebook’s fake news problem

“We have observed many actions by fake account operators that could only be performed by people with language skills and a basic knowledge of the political situation in the target countries, suggesting a higher level of coordination and forethought,” Facebook reports. “Some of the lower-skilled actors may even provide content guidance and outlines to their false amplifiers, which can give the impression of automation.”

All that said, Facebook also concluded in its study of the 2016 U.S. election that “the reach of the content shared by false amplifiers was marginal compared to the overall volume of civic content shared.”

Facebook’s report landed a day after the current chairman of the U.K. parliamentary select committee on culture, media and sport issued a plea for heightened vigilance during that country’s current election campaign.

Damian Collins told the Guardian that he feels Facebook must step up its efforts to clamp down on so-called ‘fake news’. “The danger is,” Collins said, “if for many people the main source of news is Facebook and if the news they get on Facebook is mostly fake news, they could be voting based on lies.”

Collins lamented that Facebook doesn’t “respond fast enough or at all to some of the user referrals they can get.” If Facebook can quickly see what posts are going viral, “they should then be able to check whether that story is true or not and, if it is fake, blocking it or alerting people to the fact that it is disputed,” Collins told the Guardian. Facebook, he said, can’t rely solely on users. “They have to make a judgment about whether a story is fake or not.”

Since the November election in America, Facebook has begun a global push to try to educate its users on how to interpret the news they see in their News Feed. That effort has taken different forms in different countries.

In the U.S. and Germany, for instance, Facebook has tested a tool that would mark suspect stories as “disputed” after they had been evaluated by outside partners – in the U.S., news organizations like ABC or PolitiFact, and in Germany, a non-profit investigative newsroom called Correctiv. Ahead of the elections in France, Facebook purged 30,000 fake accounts that were spreading fake news stories and spam. Facebook even took out ads in some French newspapers listing its 10 tips for evaluating news.

That was the same list Facebook promoted in Canada earlier this month in an attempt to reach its approximately 22 million users here. However, it is unknown how effective that campaign was. When asked for data relating to visits or time spent on the 10 tips site, Facebook declined to offer any, and instead said that “nearly all of the 22 million Canadians on Facebook were exposed to the PSA [public service announcement] on their Facebook News Feed.”

RELATED: The problem with Facebook’s plan to teach you how to read news

In Canada, Facebook partnered with the media literacy group MediaSmarts for the initiative, and MediaSmarts offered further ways to evaluate news on its own site. However, that required another click away from the News Feed, lowering the odds that users would see it. A spokesperson for MediaSmarts said the site received 4,000 to 5,000 more visitors than it had the week before the media literacy push – a 15 to 18 per cent bump likely attributable to the campaign. MediaSmarts also says it saw a “higher than normal” increase in Facebook followers on both its English and French pages, which it also attributes to the PSA.

Those figures – though surely not an entirely accurate reflection of how many Canadians saw the Facebook media tips page – do at least suggest that Collins, the U.K. parliamentary committee chair, might be right to think that relying on users to flag inappropriate material is not enough. Barring some seismic and more or less instant shift in the general public’s interpretation of the news they (even casually) see on their News Feed, the onus to rid the site of misinformation still rests with Facebook – even if that misinformation is “marginal”.

As the company has said before, Facebook’s security team states that only a coordinated effort in conjunction with governments, civil society programs, and journalists will be capable of tackling the ‘fake news’ problem. As Facebook said when it launched its media literacy drive in Canada, no one thing can be a “silver bullet”.


Or can it? Is there a way for a company like Facebook to implement real-time detection of these kinds of posts?

Somewhat counter-intuitively, one of the signals Facebook has said its software currently uses to flag a suspicious post is a lack of shares. In December, Adam Mosseri, Facebook’s vice-president of News Feed, explained that Facebook “found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way.” The thinking is that if you click on a story and read it, but don’t share it – and many other people do the same – it might be “problematic content.”

Whether or not that approach works, this form of detection assumes people will click through to a story rather than simply share the post because of its headline. And, as Mosseri told Recode, “A lot of people share things without reading them.”
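To make that intuition concrete, here is a rough sketch, in Python, of what such a “read-but-not-shared” signal could look like. It is an illustration only: the counters, baseline and formula are assumptions for the sake of the example, not anything Facebook has disclosed.

```python
# Illustrative sketch only: Facebook has not published its actual signal.
# Assumes we can see, per link, how many people clicked through ("reads")
# and how many of those readers went on to share it afterward.

def read_but_not_shared_score(clicks: int, shares_after_click: int,
                              baseline_share_rate: float) -> float:
    """Return a suspicion score in [0, 1].

    The intuition from Mosseri's comments: if people who actually read an
    article share it far less often than readers of a typical article with
    similar reach, the headline may have overpromised or misled them.
    """
    if clicks == 0:
        return 0.0  # no click-throughs, nothing to infer from this signal
    observed_rate = shares_after_click / clicks
    # How far below the baseline the post falls, as a fraction of the baseline.
    shortfall = max(0.0, baseline_share_rate - observed_rate) / baseline_share_rate
    return shortfall


# Example: a post read 10,000 times but shared by only 50 readers, where
# comparable posts are shared by roughly 4 per cent of readers.
score = read_but_not_shared_score(clicks=10_000, shares_after_click=50,
                                  baseline_share_rate=0.04)
print(f"suspicion score: {score:.2f}")  # ~0.88 -> candidate "problematic content"
```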

Enter the founder of Google News with a possible solution – one that would involve algorithms and humans working together.

In a Medium post published Thursday, Krishna Bharat outlined an approach that might allow Facebook to flag fake stories, such as the one that went viral during the 2016 U.S. election claiming Pope Francis had endorsed Donald Trump. What Facebook could do, Bharat says, is catch a “wave” – that is, “a set of articles that make the same new (and possibly erroneous) claim, plus associated social media posts.” Waves, Bharat writes, are significant when they show growing engagement.
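Here is a minimal sketch of how a “wave” might be represented and how growing engagement could be detected. The data structure, thresholds and example numbers are hypothetical; Bharat’s post describes the idea, not an implementation.

```python
# Illustrative sketch of Bharat's "wave" idea, not Facebook's implementation.
# Assumes articles have already been clustered by the claim they make
# (e.g. via headline/text similarity), and that we can sample each
# cluster's total engagement at regular intervals.

from dataclasses import dataclass
from typing import List


@dataclass
class Wave:
    claim: str                     # the shared (possibly erroneous) claim
    article_urls: List[str]        # articles making that claim
    engagement_samples: List[int]  # total shares+comments at successive intervals


def is_significant(wave: Wave, growth_factor: float = 1.5,
                   min_engagement: int = 10_000) -> bool:
    """A wave matters when engagement is both large and still growing."""
    samples = wave.engagement_samples
    if len(samples) < 2 or samples[-1] < min_engagement:
        return False
    return samples[-1] >= growth_factor * samples[-2]


pope_wave = Wave(
    claim="Pope Francis endorses Donald Trump",
    article_urls=["http://example-fake-site.invalid/pope-endorses-trump"],
    engagement_samples=[2_000, 15_000, 60_000],
)
print(is_significant(pope_wave))  # True: large and accelerating
```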

RELATED: Google and Facebook pledge to crack down on fake Canadian news

If such a wave were to have the traits of a ‘fake news’ story – for example: the story being shared is political; it contains sources known for spreading misinformation or is being shared by users known for spreading it; or it includes no credible news sites – then it would be flagged for a human to look at and “potentially put temporary brakes on it.” Slowing that wave, Bharat argues, would ensure the post does not reach the desired share threshold and tip into an even more viral phenomenon.
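A rough sketch of that second step might look like the following. The trait checks, domain lists and two-trait threshold are assumptions made for illustration – neither Bharat nor Facebook has published such rules.

```python
# Illustrative only: scoring a wave on the traits listed above and deciding
# whether to queue it for human review. Domain lists and thresholds are
# hypothetical.

from urllib.parse import urlparse
from typing import List

# Hypothetical lists a platform might maintain.
KNOWN_MISINFO_DOMAINS = {"example-fake-site.invalid"}
CREDIBLE_DOMAINS = {"reuters.com", "apnews.com", "bbc.co.uk"}


def count_fake_news_traits(article_urls: List[str], is_political: bool,
                           shared_by_known_spreaders: bool) -> int:
    """Count how many of the traits described above the wave exhibits."""
    domains = {urlparse(u).netloc for u in article_urls}
    traits = 0
    traits += int(is_political)
    traits += int(bool(domains & KNOWN_MISINFO_DOMAINS) or shared_by_known_spreaders)
    traits += int(not (domains & CREDIBLE_DOMAINS))  # no credible outlet carries it
    return traits


def handle_wave(article_urls: List[str], is_political: bool,
                shared_by_known_spreaders: bool) -> str:
    if count_fake_news_traits(article_urls, is_political,
                              shared_by_known_spreaders) >= 2:
        # Queue for a human reviewer and temporarily down-rank distribution so
        # the wave never reaches the share threshold that tips it fully viral.
        return "flag_for_review_and_slow"
    return "no_action"


print(handle_wave(["http://example-fake-site.invalid/pope-endorses-trump"],
                  is_political=True, shared_by_known_spreaders=True))
# -> flag_for_review_and_slow
```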

It’s possible Facebook is already contemplating – or actively using – a technique like this. The company has not mentioned it publicly if so. And if it were to start a flagging operation like the one Bharat describes, transparency would be key.

“I would expect all news that has been identified as false and slowed down or blocked to be revealed publicly,” he writes. “Google, Facebook and others have transparency reports that disclose censorship and surveillance requests by governments and law enforcement. It’s only appropriate that they too are transparent about actions that limit speech.”
