Mark Zuckerberg used nearly 6,000 words to describe the future of Facebook Thursday, but you could sum it up in two: global domination.
Sure, Facebook’s CEO appears more “woke” than ever. He meditates on substantive issues like inclusivity, the eradication of disease, responsible artificial intelligence and the future of media.
And yet. In the simplest terms, his manifesto is about how the social network will continue to be a relevant online product as more of the world becomes connected. It explores how Facebook can become a key part of global “infrastructure,” to borrow a word Zuckerberg uses literally 24 times, that will make it an indispensable part of daily life for people across the planet.
Facebook exists to grow and to make money. It treats expansion as a merit unto itself, as if there is some inherent quality to people being on Facebook that betters society.
Consider how Zuckerberg grapples in his manifesto with the idea of disturbing content.
“The guiding principles are that the Community Standards should reflect the cultural norms of our community, that each person should see as little objectionable content as possible, and each person should be able to share what they want while being told they cannot share something as little as possible,” he writes.
There’s a leap there: that someone seeing “objectionable content” is in effect a “bad” thing to be avoided at all costs. You might think Zuckerberg is referring only to the most extreme material, like child pornography or videos of suicide, content that no one would argue belongs on Facebook, but he is not. His vagueness instead calls to mind a report from November suggesting Facebook would be open to news censorship to break into the Chinese market.
“Even within a given culture, we have different opinions on what we want to see and what is objectionable,” he writes. “I may be okay with more politically charged speech but not want to see anything sexually suggestive, while you may be okay with nudity but not want to see offensive speech.”
Zuckerberg never grapples in the manifesto with the possibility that disturbing things can be important to see precisely because they’re “objectionable.”
Furthermore, his idea about solving this “problem” should raise eyebrows. Emphasis ours:
The approach is to combine creating a large-scale democratic process to determine standards with AI to help enforce them.
The idea is to give everyone in the community options for how they would like to set the content policy for themselves. Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. We will periodically ask you these questions to increase participation and so you don’t need to dig around to find them. Of course you will always be free to update your personal settings anytime.
Let’s put this another way: In Zuckerberg’s idealized, and likely upcoming, version of Facebook, the default option for what is “appropriate” in your News Feed will be determined by groupthink that is specific to your area. The manifesto isn’t overly specific, of course: Regions could be a town, city, country, continent or national park for all we know. The devil will be in the details of how this is rolled out.
But you can see the trouble already: Even as Zuckerberg concedes in his note that Facebook has a “filter bubble” problem, he outlines a system that delivers content according to a moral standard set by a majority of people. Godspeed if you find yourself in a minority of people interested in “politically charged speech” about abortion in Forsyth County, Georgia. Check those News Feed settings, folks!
This definitely isn’t going to pop anyone’s Facebook bubble.
It’s exactly the type of unprincipled thinking that has hurt Facebook in the past. Rather than take a meaningful stance in favor of the free spread of information, Zuckerberg, as ever, steers a middle course that serves Facebook’s aims: to be a happy place for all people, ensuring its user base can keep growing without provoking the ire of tyrants or censors. Individuals are not served by this thinking; they’re limited by it, because by default they won’t encounter news or content that unsettles them.
And we get it: Facebook is a business; it can do whatever it wants, and of course its major incentive is to grow and be all things to all people. The concern comes when Zuckerberg dresses up those motives as an ideology, because Facebook has frequently been a threatening force in the world.
Remember when it allowed hoaxes and propaganda to spread uninhibited in the lead-up to the election of Donald Trump? When the company tried and failed to become a dominant internet service provider in India? When it removed a line from this very manifesto suggesting it could use AI to monitor private communications and profile people? Or when it allowed advertisers to discriminate on the basis of race?
And how does Zuckerberg presume to know which approach will work best for everyone on this planet when 71 percent of his company’s senior leadership is white and 73 percent male?