Technology

Until A.I. replaces us, we all work for Facebook

The social media giant may be hiring thousands of people to respond to objectionable content, but its users are still its first line of defence

The sign outside the main entrance to Facebook HQ. Image via Facebook.

Facebook’s CEO, Mark Zuckerberg, announced Wednesday via a Facebook post – what else? – that his company is looking to add 3,000 humans to monitor live video feeds. Those fresh pairs of eyeballs will join the 4,500-strong community moderation group Facebook already employs.

The move follows a number of high-profile incidents in which crimes and suicides were posted to Facebook, either live or after the fact, spawning numerous negative news stories and uncomfortable questions. The two most recent occurred just last month. In mid-April, a man in Cleveland filmed and uploaded a video of his seemingly random murder of Robert Godwin, a 74-year-old man. (The shooter later committed suicide following a manhunt.) The video was accessible on Facebook for three hours after the act. Less than two weeks later, a man in Thailand posted a video showing him killing his 11-year-old daughter and then himself. That video remained up for roughly 24 hours before it was taken down.

The most obvious question is whether 7,500 content reviewers are enough for a social media site that boasts an average of 1.28 billion daily users. The answer is equally obvious: they are not. So we might want to ask instead what those 7,500 people mean. What does it mean that they are there?

Zuckerberg’s public pronouncements on his company’s sometimes-controversial ability to remove what it deems to be objectionable content suggest that, eventually, he expects computer programs will solve things. Facebook, he wrote Wednesday, is building better tools to “make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

More advanced programs are also being tested that might anticipate disturbing posts before they’re even reported. In March, Facebook announced that it is testing “streamlined reporting for suicide, assisted by artificial intelligence” in the United States. In a post, Facebook says its new program uses “pattern recognition in posts previously reported for suicide,” and that the A.I. “will make the option to report a post about ‘suicide or self injury’ more prominent for potentially concerning posts.”

At the same time, Facebook said it is also testing pattern recognition to focus on posts that are “very likely to include thoughts of suicide.” If the program notices ones that fit that pattern, the Community Operations team would then review the post and “if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet.”
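Facebook has not published the details of these models, but the general approach it describes (learn from posts that were previously reported, then surface similar new posts for human review) can be illustrated with a minimal, purely hypothetical sketch. The Python code below uses scikit-learn, which Facebook has not said it uses, and every post, label, function name and score in it is invented for the demonstration.

```python
# A minimal, purely illustrative sketch of "learn from previously reported
# posts, then flag similar new posts for human review." This is NOT
# Facebook's system; the data, model and ranking are invented for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy history: 1 = post was previously reported for self-harm concerns, 0 = not.
past_posts = [
    "i can't do this anymore, nobody would even notice if i was gone",
    "everything feels hopeless and i just want it to end",
    "what a comeback in last night's game",
    "anyone have a good recipe for banana bread?",
]
was_reported = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(past_posts), was_reported)

def score_for_review(new_posts):
    """Rank new posts by how closely they match previously reported ones.
    The output is a review queue for human moderators, not an automatic
    takedown: a person still makes the final call."""
    scores = classifier.predict_proba(vectorizer.transform(new_posts))[:, 1]
    return sorted(zip(new_posts, scores), key=lambda pair: pair[1], reverse=True)

for post, score in score_for_review([
    "i just want all of this to end",
    "heading to the lake this weekend, can't wait",
]):
    print(f"{score:.2f}  {post}")
```

The point of the sketch is the design choice in the last step: the model only ranks posts into a queue, and a reviewer on the human side still decides what, if anything, to do.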

Mark Zuckerberg gives a keynote address at the F8 developer conference. Image via Facebook.

Still, the thing about Facebook Live is that, well, it’s technically live. When Facebook launched the tool, Zuckerberg wrote that “when you interact live, you feel connected in a more personal way.” This is undoubtedly true, and surely why someone who sees a shooting or a suicide unfold in real time on their phone (or even believes they are seeing it live) will likely be more disturbed than if they were watching a recording, precisely because that connection has been created. You are there – live, ostensibly.

Moderators, as we usually understand them, are the watchers we trust to see the Internet before we do. That changes somewhat in this context. It’s not that moderators are missing from Facebook Live; it’s just that they are not hidden somewhere in the background, screening things before we watch them. Instead, for the most part, the moderator is us.

As news of Facebook’s planned hires spread, one Vox writer wondered on Twitter, presumably partly in jest, whether these might be “the jobs of the future.” He’s almost right.

The 7,500 or so people Facebook employs to keep tabs on, and respond to reports about, the content uploaded to the site do fulfill the role of police. They are who you call when something looks wrong. But they are also the basis on which Facebook can rest an idea essential to its product’s success: that this is a safe space. What Zuckerberg has yet to make explicit is that, though Facebook can hire and pay thousands of people to check content, until the A.I. to do it is perfected, we are all its employees.
