Mark Zuckerberg must find a “Goldilocks zone” where the privacy protections are calibrated by the state of digital maturity and levels of digital trust — and are “just right.”
“If you had asked me, when I got started with Facebook, if one of the central things I’d need to work on now is preventing governments from interfering in each other’s elections, there’s no way I thought that’s what I’d be doing if we talked in 2004 in my dorm room.”
His sentence construction may be unwieldy, but Facebook’s founder and CEO, Mark Zuckerberg, has been doing a lot of belated apologising for the latest egg on his face. Data on 50 million users had found its way from Facebook to a shadowy research outfit, Global Science Research, and then on to Cambridge Analytica, a data-mining and political consulting firm launched by Steve Bannon, the former White House advisor. Cambridge Analytica is a player of some consequence; it claims credit for tilting the field in favour of the winners in a wide swath of elections — from Donald Trump in the US in 2016 to Nitish Kumar in his landslide win in Bihar over Lalu Prasad Yadav back in 2010.
Zuckerberg has confidently promised to fix Facebook, but it will be far from easy. The ultimate social network is a victim of its own success and eye-watering revenues. When Facebook’s 2 billion users around the planet log in every month and share or swipe past some slice of the human condition as offered up by friends, family and others, the users and their contexts are bound to vary widely. To get a sense of the spread of contexts that Facebook must straddle, consider the two most important markets for the company: India, which has the largest number of Facebook users and is among its fastest-growing markets, and the home market of the US. Now, add Brazil and Indonesia as the next two markets behind these two dominant ones. To manage a social network spanning this much disparity in socio-political contexts and levels of digital trust would call for Zuckerberg to re-enroll at Harvard and get a degree in what I might call “digital anthropology”.
Unfortunately, the education of Mark Zuckerberg — who famously dropped out of Harvard to give the world Facebook — is happening in real time. The data on users is valuable to those who wish to tailor messages for advertising, for political messaging or for propaganda. It is tempting to keep the engine that delivered $40 billion in revenue last year humming — keep the eyeballs coming, keep the app developers and partners motivated and keep analysing the data to generate more advertising and affiliated revenue. However, as the Cambridge Analytica breach of trust, coming on the heels of Russian interference and “fake news”, reveals, it is easy for the temptation to get out of hand. Where should Facebook draw the line? Which market’s norms should it use to decide how stringent its data privacy rules should be? How does it restrain its harvesting of user data to put an end to this extended backlash, without sacrificing its revenue model? These questions may not be easy to answer. If it applied the same standards across all markets, its revenue engine could stall. Privacy expectations, reliance on news delivered by social media and levels of vulnerability differ across Facebook’s important markets. Let’s consider each in turn.
To get a sense of these differences, consider the reality that levels of digital trust vary around the world, particularly as one looks across Facebook’s top 10 markets. In a recent survey by the Pew Research Center, only 11 per cent of Americans said they were “very or somewhat confident” that social media or digital video sites were capable of keeping their data secure and private. The US is one of the countries that we identified in our Digital Planet research at The Fletcher School as having a “digital trust deficit”, where users are less trusting even though the digital environment is relatively trustworthy. We also found that the Asian countries in Facebook’s global top 10 — India, Indonesia, the Philippines, Vietnam and Thailand — along with Turkey, have a “digital trust surplus”: users are more trusting even though their digital environments are relatively less trustworthy. Users in these latter countries will be more forgiving.
Before the Cambridge Analytica crisis erupted, calls had already gone out for Facebook to take responsibility for the credibility of news shared on its platform. Here, again, is a dilemma. How far should Facebook go to certify sources or gauge credentials? A uniform standard may be hard for Facebook to stomach, since the way in which people get their news varies widely across demographic strata and across geographies. Data from the Pew Spring 2017 Global Attitudes Survey suggests that among Facebook’s top 10 markets, some important countries, such as India and Indonesia, have less than 20 per cent of users relying on social media for news, whereas in countries such as Brazil, Turkey, the US and Vietnam, that share is close to 40 per cent or higher. This suggests that addressing the fake news problem is far more essential in the latter group of countries and less so in the former.
Our Digital Planet research initiative reveals that countries that are digitally less advanced, but moving up the curve quickly, are also those where users behave in a more trusting manner. Many of these countries, India included, are the biggest growth opportunities for Facebook. They are experiencing the fastest growth, digital growth is primarily on mobile devices and many have large populations — all of which are critical for a company predicated on an advertising-based business model. Facebook’s Free Basics programme was booted out of India, but is in 63 developing countries and municipalities, each with citizens new to the digital economy and potentially vulnerable to manipulation. The incentive for Facebook would be to keep data privacy rules as lax as possible to give these growing markets the greatest chance of delivering revenues.
On balance, Zuckerberg is caught between two hard realities. Too few constraints will lead to new Cambridge Analyticas tumbling out of the closet. Too many could kill Facebook’s business model, since every constraint on how data is used means lost revenue. Mark Zuckerberg must find a “Goldilocks zone” where the privacy protections are calibrated by the state of digital maturity and levels of digital trust — and are “just right.” This will be hard to do. This time, the egg on Facebook’s face may stick.