Illustration, Melanie Lambrick.
It seemed like a good idea at the time. When software engineer Justin Rosenstein helped create the “like” button for Facebook a decade ago, he was hoping to bring a bit of fun and whimsy to the world’s most popular social media platform. Facebook rolled the feature out in 2009 with a post stating: “We’ve just introduced an easy way to tell friends that you like what they’re sharing on Facebook with one easy click.”
What could go wrong? More than Rosenstein could have imagined. He had intended “likes” to inject a little burst of positivity into a user’s day. Instead, those “bright dings of pseudo-pleasure” — his description of those fuzzy feelings we experience with every like — unleashed insatiable cravings for more and more likes, which only encourage advertisers and media companies to create empty, clickbait content for us to “like.” That jaunty thumbs-up, he says, now embodies what’s most broken and dysfunctional about the way we live online.
Rosenstein isn’t a voice in the wilderness. He’s one of a growing number of tech-industry insiders who are renouncing their own creations and speaking out against what they see as the pernicious, destructive effects of the tech platforms they’ve helped build. They are critical of how social media and technology companies are monopolizing our attention spans, undermining civic discourse, and ignoring the mental-health impacts their products have on us — from FOMO to chronic anxiety to outright addiction. They’re not pulling their punches, either. Former Facebook employee Chamath Palihapitiya, who once led the company’s user-growth strategy, told an audience at the Stanford Graduate School of Business last year: “We have created tools that are ripping apart the social fabric of how society works.”
This chorus of digital naysayers has been steadily growing over the past several years, adding more and more disenchanted Silicon Valley exiles to its ranks. They want to tear down what they see as the addictive, destructive, brain-hijacking elements of information tech and social media, and rebuild them according to principles of ethical design — in essence, making sure that technology, and technology companies, serve us, and not the other way around.
For an industry that long operated under the maxim “move fast and break things” (Facebook’s official motto until 2014), the mea culpas are a dramatic turnaround. Just a few years ago, social media was seen as a revolutionary medium that could connect people of all backgrounds, and was even touted as the midwife of the Arab Spring revolutions. The worst criticism levelled against it was that it was frivolous, a platform for sharing photos of your lunch with the world. Rumblings about the darker implications of tech were mostly ignored amid the hype, IPOs, and skyrocketing stock prices.
But what was supposed to be an idyllic global village of instant news and social connection, one that would tear down borders and prejudice, has turned out to be a bit of a hellscape. A scroll through Twitter can be a rage-inducing experience, Facebook is full of bickering, and checking Instagram is an exercise in envy. With fake news, privacy breaches, and armies of online trolls and bullies, the internet, quite often, is a highly unpleasant place to be.
But the design ethicists aren’t Luddites — the promise of the early internet, most believe, is still there, waiting to be unearthed. “It’s too easy to say social media is making us miserable, so let’s all get off it,” says David Ryan Polgar, one of the leading authorities on design ethics. A lawyer and tech ethicist, Polgar founded All Tech is Human, an ethics discussion forum that hosted its first event this March at New York’s Grand Central Tech. “The internet was originally conceived in these utopian ideals,” he says. “But we’ve gotten way off base.”
Working in the heart of Silicon Valley for more than a decade, San Francisco psychotherapist Nicolle Zapien has seen the effects of this dystopia first-hand: a steadily increasing number of clients who are more disconnected, more depressed and more anxious than ever. “A client at 9 a.m. is talking about how he has a million Facebook friends, but he never sees anyone. Another masturbates all day to porn and can’t shut it off. Another is enraged by someone’s social media behaviour and can’t stop thinking about it,” she says, adding there’s no formal, research-based training or background to deal with the explosion in tech-related malaise.
Behind all this are two major problems with social media. The first is the web’s basic business model: because social media platforms make almost all of their money from advertising, they’re designed to keep us glued to our screens for as long as possible, seeing as many ads as possible. One of the best-known design ethicists is Tristan Harris, a former Google employee who this year co-founded the Center for Humane Technology. Harris likens smartphones to tiny slot machines we carry around and compulsively check all day long. “All tech designers need to do is link a user’s action (like pulling a lever) with a variable reward,” wrote Harris in a blog post in 2016. “You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
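To make the slot-machine analogy concrete, here is a minimal, purely illustrative sketch in Python of the variable-reward loop Harris describes; the function name and the 30 per cent payoff rate are invented for this example and aren’t drawn from any platform’s actual design.

```python
import random

# A toy model (not any platform's actual code) of the variable-reward loop:
# each "pull" either pays off or doesn't, and you can't predict which.
def check_phone(payoff_probability=0.3):
    """Simulate one compulsive check: sometimes a reward, usually nothing."""
    if random.random() < payoff_probability:
        return "New likes! A bright little ding of pseudo-pleasure."
    return "Nothing new. Better check again soon."

# Ten idle moments, ten pulls of the lever.
for _ in range(10):
    print(check_phone())
```

Because the payoff arrives unpredictably rather than on a fixed schedule, every empty result only sharpens the urge to pull the lever one more time.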
Which is why so many of us, if we have more than a few seconds of empty time, reflexively fill it by checking our inbox or swiping over to Facebook to see if anyone liked that clever post — that bright ding of pseudo-pleasure again, the dopamine rush of the gambling addict hitting the jackpot. And once we’re in, the platforms keep us there: YouTube auto-plays videos as soon as the last one ends, encouraging endless watching. Snapchat creates “snapstreaks,” rewarding users who stay on the platform day in and day out. And almost every platform features an endless scroll, filled with sponsored posts and ads, tailored to what its algorithms have determined a user will respond to.
The second problem is the corrupting effect of social media on politics and social discourse. Basically, social media makes us mean, providing a forum for the wildfire spread of misinformation and rumour, as well as bullying, public shaming and ultra-polarized political mudslinging. “Social media doesn’t allow enough time for empathy or normal social feedback loops to happen,” explains Zapien. “If I say something [to someone] in person, I can see how it affects them immediately.” Online, however, “we see an acceleration of negative and positive encounters to extremes.”
Attempts to study these effects are difficult, in part because it’s tricky to separate cause and effect — does constant connection make us anxious or addicted? Or do people get hooked on social media because they’re already anxious or bored? Most of the research looking at the intersection of tech and mental health isn’t quite that clear cut (yet), but there’s an awful lot of very suggestive data out there.
A study in the American Journal of Preventive Medicine published last year looked at 1,787 Americans aged 19 to 32, and found that those with higher social media use felt more socially isolated. A 2014 study in the Journal of Social and Clinical Psychology also found a link between Facebook use and depressive symptoms.
And in 2016, Michael Van Ameringen, a professor in the Department of Psychiatry and Behavioural Neuroscience at Hamilton’s McMaster University, led a study on internet use among 254 undergraduate students. He found that 40 per cent of them met the criteria for problematic internet use. Some were spending more than six hours of non-essential time online every day, and reported “higher rates of ADHD symptoms; they also reported impaired functioning at work, school or social life.”
Van Ameringen believes that if the tech industry wants to get serious about mental health, it needs to work closely with doctors and mental-health professionals to set design parameters, rethink products and, since it already monitors users so closely, track usage for signs of addiction or other problems. “We need to see usage patterns, we need to have input,” he says. “Do we really believe [tech companies] are going to be altruistic enough to do this on their own?”
Fifteen years ago, the web more closely resembled the free-for-all of the techno-utopians’ dreams, with scrappy websites popping up left and right, and big companies only taking tentative forays online. Today, the world’s most popular websites — Google, Facebook, YouTube, Netflix, Twitter — are also among the world’s biggest companies. Google and Facebook, and the companies they own, eat up a whopping 80 per cent of advertising dollars across the entire internet.
As the web is monopolized by a handful of companies, their products’ designs — and flaws — dictate the way billions of people interact every day. And because of their ubiquity, there are no real alternatives. In all likelihood, your entire social media history, email history, work correspondence, contacts, photos and treasured memories are drifting in a data cloud controlled by a handful of the most powerful companies the world has ever seen.
But some in the design-ethics community believe that if there’s a sufficient consumer demand for a healthier internet, things can change — those monopolies aren’t invincible.
“There’s already a market for a healthier internet,” says Zapien, “and it’s just starting to be recognized. Will it be Facebook and Twitter to create it, or someone altogether new that doesn’t even exist yet?” A healthier internet will be one whose financial model isn’t predicated so thoroughly on advertising revenue and users’ eyeballs. It will be quicker to crack down on hate speech and threats. It will foster nuanced interaction rather than pitched virtual shouting matches.
To that end, Zapien and the California Institute of Integral Studies partnered last year with former Google product manager Stephen Cognetta on two hackathons. One was intended to create new apps focused on mental health; the other deconstructed the negative effects of existing platforms. The winning entry in the former was “Emotion-Ally,” a web-browser plug-in that interprets tone and emotion in written content — whether email, social media or anything else — and offers suggestions before you go dashing off a spiteful tweet or hasty email.
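For a rough sense of how such a tool might work (a hypothetical sketch, not Emotion-Ally’s actual implementation), a plug-in could scan a draft for hostile wording and nudge the writer before they hit send:

```python
# Hypothetical sketch of a tone-checking plug-in. The word list and the
# suggestion text are invented for illustration; Emotion-Ally's real
# approach to detecting emotion is not described here.
HOSTILE_WORDS = {"idiot", "stupid", "pathetic", "hate", "useless"}

def tone_check(draft: str) -> str:
    """Flag a draft that reads as hostile and suggest a pause before sending."""
    words = set(draft.lower().split())
    hits = words & HOSTILE_WORDS
    if hits:
        return f"This reads as angry ({', '.join(sorted(hits))}). Sleep on it?"
    return "Tone looks fine. Send away."

print(tone_check("You are an idiot and I hate this plan"))
```

A real product would need far subtler language analysis than a word list, but the basic shape is the same: read the draft, estimate its emotional temperature, and interrupt before the send button gets hit.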
Neither Zapien nor Cognetta imagines that their plug-in will change the world. But, says Zapien, these kinds of projects are a first step that could lead to the creation of all-new companies and platforms that directly challenge the old guard — or help make ethical design so desirable, and so demanded by users, that the biggest companies have to make it more than a PR exercise.
But so far, despite user backlash, the tech world’s giants aren’t suffering too much. In the past two years, Facebook became the platform of choice for the fake-news propagandists working to elect Donald Trump; was implicated in the Cambridge Analytica scandal, after the British political-consulting firm mined private data on 87 million users; and was targeted by the #deletefacebook movement, a retaliatory strike by users angry that their data wasn’t as private as they’d thought.
But none of that stopped the company’s user base from reaching all-time highs this year, with 1.47 billion “daily active users” globally as of this June — meaning that one-fifth of living humans use Facebook at least daily. The sundry scandals did create a little financial friction, however. In July, Facebook’s shares nosedived after its second-quarter earnings report showed the company missing revenue expectations, bringing in only $13.23 billion, as opposed to the $13.36 billion expected. Regardless, it was still the highest one-quarter revenue in the company’s history.
So Zuckerberg probably didn’t blink in July when Britain’s Information Commissioner’s Office fined the company £500,000 ($871,000) for its role in the Cambridge Analytica scandal, finding that it failed to properly handle users’ personal data. The fine came to about 10 minutes’ worth of revenue — they’ll make it up by the time you finish reading this story.
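The back-of-the-envelope math is simple enough; here is a quick sketch using the quarterly revenue figure cited above, treating a quarter as roughly 91 days and, as the comparison implicitly does, ignoring the currency difference between the fine and the revenue:

```python
# Rough arithmetic behind "about 10 minutes' worth of revenue."
quarterly_revenue = 13.23e9          # Facebook's Q2 2018 revenue, as cited above
fine = 871_000                       # the ICO fine, as converted above
minutes_in_quarter = 91 * 24 * 60    # a quarter, in minutes

revenue_per_minute = quarterly_revenue / minutes_in_quarter
print(f"Revenue per minute: ${revenue_per_minute:,.0f}")
print(f"The fine equals roughly {fine / revenue_per_minute:.0f} minutes of revenue")
```

By this count the fine works out to roughly nine minutes, in the same neighbourhood as the estimate above.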
On the other hand, hubris goes before a fall, and facing down last year’s endless string of scandals, Mark Zuckerberg struck a humbled pose, writing, “One of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent… The research shows that when we use social media to connect with people we care about, it can be good for our well-being.” The post accompanied changes to Facebook’s feed algorithm that prioritized posts by individuals (i.e., friends and family) and reduced content by brands and publishers.
Zuckerberg lifted “time well spent” from the Center for Humane Technology’s Tristan Harris, and had even used the phrase himself a few months earlier. At the time, Harris accused him of co-opting the term, saying the pledge was simply dishonest unless Facebook abandoned advertising as its main revenue stream. (The algorithm changes also meant that brands and publishers would likely have to pay for more sponsored posts and ads in order to reach the same audiences as they used to, adding another meaning to the phrase.)
But others, including Polgar, were cautiously encouraged. “At least the argument is making it through,” he says.
So far, the modest steps Silicon Valley has taken reflect a very Silicon Valley approach: that tech can be optimized to solve the problems that tech created. The next Apple operating system, for example, will include “Screen Time,” which tracks how long you spend on devices and individual apps and lets you set time limits — so you could shut down social media right before bed. Instagram is working on a feature to be called “you’re all caught up,” which will appear when users have scrolled through every post published by everyone they follow in the past 48 hours.
Cognetta, for all his caution, is fundamentally a techno-optimist. Until he left Google to focus on mental-health outreach, he worked on Google Doodles — the whimsical animations that replace the company’s logo on its search-engine homepage to celebrate holidays and commemorate major events. He believes the seeds of change have already been planted in the industry, and points to his old job as an example.
“Google Doodles doesn’t need to exist,” he says, “but Google cares about user delight, and making people feel welcome.” He acknowledges that it’s also a PR exercise that helps people feel warmer and fuzzier about a $132-billion corporate behemoth. But if the company can deploy its resources on a non-essential feature that does nothing except make people crack a smile, he says, that’s not nothing.
Ask him how technology can be used to improve mental health, and he might point at an old-fashioned telephone.
Shortly after starting his job with Google, he decided to do something to break out of the claustrophobic tech-world bubble in San Francisco — so he volunteered at a suicide hotline. “It was formative in terms of showing me a part of the world that wasn’t at all apparent from working at a big tech company,” he says. “All you see there are the numbers of users, but not the users themselves.”
Working at the hotline, he says, showed him the power of a simple technology to save lives. “I don’t think technology is bad,” he says. “And I really hope that this is a turning point, where we realize how to design for principle, as well as profit.”