WE DIDN’T SIGN UP FOR THIS
I remember when I first encountered Facebook: It was the spring of 2004; I was a senior in college and began to notice an increasing number of my friends talking about a website called thefacebook.com. The first person to show me an actual Facebook profile was Julie, who was then my girlfriend and is now my wife.
“My memory of it was that it was a novelty,” she told me recently. “It had been sold to us as a virtual version of our printed freshman directory, something we could use to look up the boyfriends or girlfriends of people we knew.”
The key word in this memory is novelty. Facebook didn’t arrive in our world with a promise to radically transform the rhythms of our social and civic lives; it was just one diversion among many. In the spring of 2004, the people I knew who signed up for thefacebook.com were almost certainly spending significantly more time playing Snood (a Tetris-style puzzle game that was inexplicably popular) than they were tweaking their profiles or poking their virtual friends.
“It was interesting,” Julie summarized, “but it certainly didn’t seem like this was something on which we would spend any real amount of time.”
Three years later, Apple released the iPhone, sparking the mobile revolution. What many forget, however, is that the original “revolution” promised by this device was also much more modest than the impact it eventually created. In our current moment, smartphones have reshaped people’s experience of the world by providing an always-present connection to a humming matrix of chatter and distraction. In January 2007, when Steve Jobs revealed the iPhone during his famous Macworld keynote, the vision was much less grandiose.
One of the major selling points of the original iPhone was that it integrated your iPod with your cell phone, preventing you from having to carry around two separate devices in your pockets. (This is certainly how I remember thinking about the iPhone’s benefits when it was first announced.) Accordingly, when Jobs demonstrated an iPhone onstage during his keynote address, he spent the first eight minutes of the demo walking through its media features, concluding: “It’s the best iPod we’ve ever made!”
Another major selling point of the device when it launched was the many ways in which it improved the experience of making phone calls. It was big news at the time that Apple forced AT&T to open its voicemail system to enable a better interface for the iPhone. Onstage, Jobs was also clearly enamored of the simplicity with which you could scroll through phone numbers, and the fact that the dial pad appeared on the screen instead of requiring permanent plastic buttons.
“The killer app is making calls,” Jobs exclaimed to applause during his keynote. It wasn’t until thirty-three minutes into that famed presentation that he got around to highlighting features like improved text messaging and mobile internet access that dominate the way we now use these devices.
To confirm that this limited vision was not some quirk of Jobs’s keynote script, I spoke with Andy Grignon, who was one of the original iPhone team members. “This was supposed to be an iPod that made phone calls,” he confirmed. “Our core mission was playing music and making phone calls.” As Grignon then explained to me, Steve Jobs was initially dismissive of the idea that the iPhone would become more of a general-purpose mobile computer running a variety of different third-party applications. “The second we allow some knucklehead programmer to write some code that crashes it,” Jobs once told Grignon, “that will be when they want to call 911.”
When the iPhone first shipped in 2007, there was no App Store, no social media notifications, no quick snapping of photos to Instagram, no reason to surreptitiously glance down a dozen times during a dinner—and this was absolutely fine with Steve Jobs, and the millions who bought their first smartphone during this period. As with the early Facebook adopters, few predicted how much our relationship with this shiny new tool would mutate in the years that followed.
It’s widely accepted that new technologies such as social media and smartphones massively changed how we live in the twenty-first century. There are many ways to portray this change. I think the social critic Laurence Scott does so quite effectively when he describes the modern hyper-connected existence as one in which “a moment can feel strangely flat if it exists solely in itself.”
The point of the above observations, however, is to emphasize what many also forget: these changes, in addition to being massive and transformational, were also unexpected and unplanned. A college senior who set up an account on thefacebook.com in 2004 to look up classmates probably didn’t predict that the average modern user would spend around two hours per day on social media and related messaging services, with close to half that time dedicated to Facebook’s products alone. Similarly, an early adopter who picked up an iPhone in 2007 for the music features would be less enthusiastic if told that within a decade he could expect to compulsively check the device eighty-five times a day—a “feature” we now know Steve Jobs never considered as he prepared his famous keynote.
These changes crept up on us and happened fast, before we had a chance to step back and ask what we really wanted out of the rapid advances of the past decade. We added new technologies to the periphery of our experience for minor reasons, then woke one morning to discover that they had colonized the core of our daily life. We didn’t, in other words, sign up for the digital world in which we’re currently entrenched; we seem to have stumbled backward into it.
This nuance is often missed in our cultural conversation surrounding these tools. In my experience, when concerns about new technologies are publicly discussed, techno-apologists are quick to push back by turning the discussion to utility—providing case studies, for example, of a struggling artist finding an audience through social media, or WhatsApp connecting a deployed soldier with her family back home. They then conclude that it’s incorrect to dismiss these technologies on the grounds that they’re useless, a tactic that is usually sufficient to end the debate.
The techno-apologists are right in their claims, but they’re also missing the point. The perceived utility of these tools is not the ground on which our growing wariness builds. If you ask the average social media user, for example, why they use Facebook, or Instagram, or Twitter, they can provide you with reasonable answers. Each one of these services probably offers them something useful that would be hard to find elsewhere: the ability, for example, to keep up with baby pictures of a sibling’s child, or to use a hashtag to monitor a grassroots movement.
The source of our unease is not evident in these thin-sliced case studies, but instead becomes visible only when confronting the thicker reality of how these technologies as a whole have managed to expand beyond the minor roles for which we initially adopted them. Increasingly, they dictate how we behave and how we feel, and somehow coerce us to use them more than we think is healthy, often at the expense of other activities we find more valuable. What’s making us uncomfortable, in other words, is this feeling of losing control—a feeling that instantiates itself in a dozen different ways each day, such as when we tune out with our phone during our child’s bath time, or lose our ability to enjoy a nice moment without a frantic urge to document it for a virtual audience.
It’s not about usefulness, it’s about autonomy.
The obvious next question, of course, is how we got ourselves into this mess. In my experience, most people who struggle with the online part of their lives are not weak willed or stupid. They’re instead successful professionals, striving students, loving parents; they are organized and used to pursuing hard goals. Yet somehow the apps and sites beckoning from behind the phone and tablet screen—unique among the many temptations they successfully resist daily—managed to metastasize far beyond their original roles.
A large part of the answer to how this happened is that many of these new tools are not nearly as innocent as they might first seem. People don’t succumb to screens because they’re lazy, but instead because billions of dollars have been invested to make this outcome inevitable. Earlier I noted that we seem to have stumbled backward into a digital life we didn’t sign up for. As I’ll argue next, it’s probably more accurate to say that we were pushed into it by the high-end device companies and attention economy conglomerates who discovered there are vast fortunes to be made in a culture dominated by gadgets and apps.
Bill Maher ends every episode of his HBO show Real Time with a monologue. The topics are usually political. This was not the case, however, on May 12, 2017, when Maher looked into the camera and said:
The tycoons of social media have to stop pretending that they’re friendly nerd gods building a better world and admit they’re just tobacco farmers in T-shirts selling an addictive product to children. Because, let’s face it, checking your “likes” is the new smoking.
Maher’s concern with social media was sparked by a 60 Minutes segment that aired a month earlier. The segment is titled “Brain Hacking,” and it opens with Anderson Cooper interviewing a lean, red-haired engineer with the carefully tended stubble popular among young men in Silicon Valley. His name is Tristan Harris, a former start-up founder and Google engineer who deviated from his well-worn path through the world of tech to become something decidedly rarer in this closed world: a whistleblower.
“This thing is a slot machine,” Harris says early in the interview while holding up his smartphone.
“How is that a slot machine?” Cooper asks.
“Well, every time I check my phone, I’m playing the slot machine to see ‘What did I get?’” Harris answers. “There’s a whole playbook of techniques that get used [by technology companies] to get you using the product for as long as possible.”
“Is Silicon Valley programming apps or are they programming people?” Cooper asks.
“They are programming people,” Harris says. “There’s always this narrative that technology’s neutral. And it’s up to us to choose how we use it. This is just not true—”
“Technology is not neutral?” Cooper interrupts.
“It’s not neutral. They want you to use it in particular ways and for long periods of time. Because that’s how they make their money.”
Bill Maher, for his part, thought this interview seemed familiar. After playing a clip of the Harris interview for his HBO audience, Maher quips: “Where have I heard this before?” He then cuts to Mike Wallace’s famous 1995 interview with Jeffrey Wigand—the whistleblower who confirmed for the world what most already suspected: that the big tobacco companies engineered cigarettes to be more addictive.
“Philip Morris just wanted your lungs,” Maher concludes. “The App Store wants your soul.”
Harris’s transformation into a whistleblower is exceptional in part because his life leading up to it was so normal by Silicon Valley standards. Harris, who at the time of this writing is in his midthirties, was raised in the Bay Area. Like many engineers, he grew up hacking his Macintosh and writing computer code. He went to Stanford to study computer science and, after graduating, started a master’s degree working in BJ Fogg’s famed Persuasive Technology Lab—which explores how to use technology to change how people think and act. In Silicon Valley, Fogg is known as the “millionaire maker,” a reference to the many people who passed through his lab and then applied what they learned to help build lucrative tech start-ups (a group that includes, among other dot-com luminaries, Instagram co-founder Mike Krieger). Following this established path, Harris, once sufficiently schooled in the art of mind-device interaction, dropped out of the master’s program to found Apture, a tech start-up that used pop-up factoids to increase the time users spent on websites.
In 2011, Google acquired Apture, and Harris was put to work on the Gmail inbox team. It was at Google where Harris, now working on products that could impact hundreds of millions of people’s behaviors, began to grow concerned. After a mind-opening experience at Burning Man, Harris, in a move straight out of a Cameron Crowe screenplay, wrote a 144-slide manifesto titled “A Call to Minimize Distraction & Respect Users’ Attention.” Harris sent the manifesto to a small group of friends at Google. It soon spread to thousands in the company, including CEO Larry Page, who called Harris into a meeting to discuss the bold ideas. Page named Harris to the newly invented position of “product philosopher.”
But then: Nothing much changed. In a 2016 profile in the Atlantic, Harris blamed the lack of change on the “inertia” of the organization and a lack of clarity about what he was advocating. The primary source of friction, of course, is almost certainly simpler: Minimizing distraction and respecting users’ attention would reduce revenue. Compulsive use sells, which Harris now acknowledges when he claims that the attention economy drives companies like Google into a “race to the bottom of the brain stem.”
So Harris quit, started a nonprofit called Time Well Spent with the mission of demanding technology that “serves us, not advertising,” and went public with his warnings about the lengths to which technology companies will go to “hijack” our minds.
In Washington, DC, where I live, it’s well-known that the biggest political scandals are those that confirm a negative that most people already suspected to be true. This insight perhaps explains the fervor that greeted Harris’s revelations. Soon after going public, he was featured on the cover of the Atlantic, interviewed on 60 Minutes and PBS NewsHour, and whisked off to give a TED talk. For years, those of us grumbling about the seeming ease with which people were becoming slaves to their smartphones had been dismissed as alarmists. But then Harris came along and confirmed what many were increasingly suspecting to be true: These apps and slick sites were not, as Bill Maher put it, gifts from “nerd gods building a better world.” They were, instead, designed to put slot machines in our pockets.
Harris had the moral courage to warn us about the hidden dangers of our devices. If we want to thwart their worst effects, however, we need to better understand how they’re so easily able to subvert our best intentions for our lives. Fortunately, when it comes to this goal, we have a good guide. As it turns out, during the same years when Harris was wrestling with the ethical impact of addictive technology, a young marketing professor at NYU turned his prodigious focus to figuring out how exactly this techno-addiction works.
Before 2013, Adam Alter had little interest in technology as a research subject. A business professor with a PhD from Princeton in social psychology, Alter studied the broad question of how features in the world around us influence our thoughts and behavior.
Alter’s doctoral dissertation, for example, studied how coincidental connections between you and another person can impact how you feel about each other. “If you find out you have the same birthday as someone who does something horrible,” Alter explained to me, “you hate them even more than if you didn’t have that information.”
His first book, Drunk Tank Pink, cataloged numerous similar cases where seemingly small environmental factors create large changes in behavior. The title, for example, refers to a study showing that aggressively drunk inmates at a Seattle naval prison were notably calmed after spending just fifteen minutes in a cell painted a particular shade of Pepto-Bismol pink, as were Canadian schoolchildren when taught in a classroom of the same color. The book also reveals that wearing a red shirt in a dating profile photo will lead to significantly more interest than any other color, and that the easier your name is to pronounce, the faster you’ll advance in the legal profession.
What made 2013 a turning point for Alter’s career was a cross-country flight from New York to LA. “I had grand plans to get some sleep and do some work,” he told me. “But as we started taxiing to take off, I began playing a simple strategy game on my phone called 2048. When we landed six hours later, I was still playing the game.”
After publishing Drunk Tank Pink , Alter had begun searching for a new topic to pursue—a quest that kept leading him back to a key question: “What’s the single biggest factor shaping our lives today?” His experience of compulsive game playing on his six-hour flight suddenly snapped the answer into sharp focus: our screens .
By this point, of course, others had already started asking critical questions about our seemingly unhealthy relationship with new technologies like smartphones and video games, but what set Alter apart was his training in psychology. Instead of approaching the issue as a cultural phenomenon, he focused on its psychological roots. This new perspective led Alter inevitably and unambiguously in an unnerving direction: the science of addiction.
To many people, addiction is a scary word. In popular culture, it conjures images of drug addicts stealing their mother’s jewelry. But to psychologists, addiction has a careful definition that’s stripped of these more lurid elements. Here’s a representative example:
Addiction is a condition in which a person engages in use of a substance or in a behavior for which the rewarding effects provide a compelling incentive to repeatedly pursue the behavior despite detrimental consequences.
Until recently, it was assumed that addiction applied only to alcohol or drugs: substances that include psychoactive compounds that can directly change your brain chemistry. As the twentieth century gave way to the twenty-first, however, a mounting body of research suggested that behaviors that did not involve ingesting substances could become addictive in the technical sense defined above. An important 2010 survey paper, for example, appearing in the American Journal of Drug and Alcohol Abuse, concluded that “growing evidence suggests that behavioral addictions resemble substance addictions in many domains.” The article points to pathological gambling and internet addiction as two particularly well-established examples of these disorders. When the American Psychiatric Association published its fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) in 2013, it included, for the first time, behavioral addiction as a diagnosable problem.
This brings us back to Adam Alter. After he reviewed the psychology literature and interviewed key figures in the technology world, two things became clear. First, our new technologies are particularly well suited to foster behavioral addictions. As Alter admits, the behavioral addictions connected to technology tend to be “moderate” as compared to the strong chemical dependencies created by drugs and cigarettes. If I force you to quit Facebook, you’re not likely to suffer serious withdrawal symptoms or sneak out in the night to an internet café to get a fix. On the other hand, these addictions can still be quite harmful to your well-being. You might not sneak out to access Facebook, but if the app is only one tap away on the phone in your pocket, a moderate behavioral addiction will make it really hard to resist checking your account again and again throughout the day.
The second thing that became clear to Alter during his research is even more disturbing. Just as Tristan Harris warned, in many cases these addictive properties of new technologies are not accidents, but instead carefully engineered design features.
The natural follow-up question to Alter’s conclusions is: What specifically makes new technologies well suited to foster behavioral addictions? In his 2017 book, Irresistible, which details his study of this topic, Alter explores the many different “ingredients” that make a given technology likely to hook our brain and cultivate unhealthy use. I want to briefly focus on two forces from this longer treatment that not only seemed particularly relevant to our discussion but also, as you’ll soon learn, repeatedly came up in my own research on how tech companies encourage behavioral addiction: intermittent positive reinforcement and the drive for social approval.
Our brains are highly susceptible to these forces. This matters because many of the apps and sites that keep people compulsively checking their smartphones and opening browser tabs often leverage these hooks to make themselves nearly impossible to resist. To understand this claim, let’s briefly discuss both.
We begin with the first force: intermittent positive reinforcement. Scientists have known since Michael Zeiler’s famous pecking pigeon experiments from the 1970s that rewards delivered unpredictably are far more enticing than those delivered with a known pattern. Something about unpredictability releases more dopamine—a key neurotransmitter for regulating our sense of craving. The original Zeiler experiment had pigeons pecking a button that unpredictably released a food pellet. As Adam Alter points out, this same basic behavior is replicated in the feedback buttons that have accompanied most social media posts since Facebook introduced the “Like” icon in 2009.
“It’s hard to exaggerate how much the ‘like’ button changed the psychology of Facebook use,” Alter writes. “What had begun as a passive way to track your friends’ lives was now deeply interactive, and with exactly the sort of unpredictable feedback that motivated Zeiler’s pigeons.” Alter goes on to describe users as “gambling” every time they post something on a social media platform: Will you get likes (or hearts or retweets), or will it languish with no feedback? The former creates what one Facebook engineer calls “bright dings of pseudo-pleasure,” while the latter feels bad. Either way, the outcome is hard to predict, which, as the psychology of addiction teaches us, makes the whole activity of posting and checking maddeningly appealing.
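For readers who like to see a mechanism made concrete, here is a minimal simulation, sketched in Python with illustrative names of my own invention (nothing below comes from Zeiler’s actual protocol or any platform’s real code). It compares a fixed reward schedule with a variable one that pays out at exactly the same average rate. The only difference between the two is predictability—and that difference is the entire hook.

```python
import random

def fixed_schedule(checks: int, every: int) -> list[int]:
    # A reward arrives on every Nth check: fully predictable,
    # so no individual check carries any suspense.
    return [1 if (i + 1) % every == 0 else 0 for i in range(checks)]

def variable_schedule(checks: int, p: float) -> list[int]:
    # A reward arrives on each check independently with probability p:
    # the same long-run payout, but any given check might hit.
    return [1 if random.random() < p else 0 for _ in range(checks)]

random.seed(42)  # make the demo reproducible
fixed = fixed_schedule(10_000, every=10)
variable = variable_schedule(10_000, p=0.1)

# Both schedules deliver roughly one reward per ten checks...
print(sum(fixed) / len(fixed))        # exactly 0.1
print(sum(variable) / len(variable))  # approximately 0.1
# ...but only the variable schedule leaves you asking, every
# single time, "What did I get?"
```

The averages match, which is the point: what Zeiler’s pigeons (and, Alter argues, social media users) respond to is not the quantity of reward but its unpredictability.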
Social media feedback, however, is not the only online activity with this property of unpredictable reinforcement. Many people have had the experience of visiting a content website for a specific purpose—say, going to a newspaper site to check the weather forecast—and then finding themselves, thirty minutes later, still mindlessly following trails of links, skipping from one headline to another. This behavior can also be sparked by unpredictable feedback: most articles end up duds, but occasionally you’ll land on one that creates a strong emotion, be it righteous anger or laughter. Every appealing headline clicked or intriguing link tabbed is another metaphorical pull of the slot machine handle.
Technology companies, of course, recognize the power of this unpredictable positive feedback hook and tweak their products with it in mind to make their appeal even stronger. As whistleblower Tristan Harris explains: “Apps and websites sprinkle intermittent variable rewards all over their products because it’s good for business.” Attention-catching notification badges, or the satisfying way a single finger swipe swoops in the next potentially interesting post, are often carefully tailored to elicit strong responses. As Harris notes, the notification symbol for Facebook was originally blue, to match the palette of the rest of the site, “but no one used it.” So they changed the color to red—an alarm color—and clicking skyrocketed.
In perhaps the most telling admission of all, in the fall of 2017, Sean Parker, the founding president of Facebook, spoke candidly at an event about the attention engineering deployed by his former company:
The thought process that went into building these applications, Facebook being the first of them, . . . was all about: “How do we consume as much of your time and conscious attention as possible?” And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever.
The whole social media dynamic of posting content, and then watching feedback trickle back unpredictably, seems fundamental to these services, but as Tristan Harris points out, it’s actually just one arbitrary option among many for how they could operate. Remember that early social media sites featured very little feedback—their operations focused instead on posting and finding information. It tends to be these early, pre-feedback-era features that people cite when explaining why social media is important to their life. When justifying Facebook use, for example, many will point to something like the ability to find out when a friend’s new baby is born, which is a one-way transfer of information that does not require feedback (it’s implied that people “like” this news).
In other words, there’s nothing fundamental about the unpredictable feedback that dominates most social media services. If you took these features away, you probably wouldn’t diminish the value most people derive from them. This specific dynamic is so universal because it works really well for keeping eyes glued to screens. These powerful psychological forces are a large part of what Harris had in mind when he held up a smartphone on 60 Minutes and told Anderson Cooper “this thing is a slot machine.”
Let’s now consider the second force that encourages behavioral addiction: the drive for social approval. As Adam Alter writes: “We’re social beings who can’t ever completely ignore what other people think of us.” This behavior, of course, is adaptive. In Paleolithic times, it was important that you carefully managed your social standing with other members of your tribe because your survival depended on it. In the twenty-first century, however, new technologies have hijacked this deep drive to create profitable behavioral addictions.
Consider, once again, social media feedback buttons. In addition to being unpredictable, as discussed above, this feedback also concerns other people’s approval. If lots of people click the little heart icon under your latest Instagram post, it feels like the tribe is showing you approval—which we’re adapted to strongly crave. The other side of this evolutionary bargain, of course, is that a lack of positive feedback creates a sense of distress. This is serious business for the Paleolithic brain, and therefore it can develop an urgent need to continually monitor this “vital” information.
The power of this drive for social approval should not be underestimated. Leah Pearlman, who was a product manager on the team that developed the “Like” button for Facebook (she was the author of the blog post announcing the feature in 2009), has become so wary of the havoc it causes that now, as a small business owner, she hires a social media manager to handle her Facebook account so she can avoid exposure to the service’s manipulation of the human social drive. “Whether there’s a notification or not, it doesn’t really feel that good,” Pearlman said about the experience of checking social media feedback. “Whatever we’re hoping to see, it never quite meets that bar.”
A similar drive to regulate social approval helps explain the current obsession among teenagers with maintaining Snapchat “streaks” with their friends, as a long unbroken streak of daily communication is a satisfying confirmation that the relationship is strong. It also explains the universal urge to immediately answer an incoming text, even in the most inappropriate or dangerous conditions (think: behind the wheel). Our Paleolithic brain categorizes ignoring a newly arrived text the same way it categorizes snubbing the tribe member trying to attract your attention by the communal fire: as a potentially dangerous social faux pas.
The technology industry has become adept at exploiting this instinct for approval. Social media, in particular, is now carefully tuned to offer you a rich stream of information about how much (or how little) your friends are thinking about you at the moment. Tristan Harris highlights the example of tagging people in photos on services like Facebook, Snapchat, and Instagram. When you post a photo using these services, you can “tag” the other users who also appear in the photo. This tagging process sends the target of the tag a notification. As Harris explains, these services now make this process nearly automatic by using cutting-edge image recognition algorithms to figure out who is in your photos and offer you the ability to tag them with just a single click—an offer usually made in the form of a quick yes/no question (“do you want to tag . . . ?”) to which you’ll almost certainly answer yes.
This single click requires almost no effort on your part, but to the user being tagged, the resulting notification creates a socially satisfying sense that you were thinking about them. As Harris argues, these companies didn’t invest the massive resources necessary to perfect this auto-tagging feature because it was somehow crucial to their social network’s usefulness. They instead made this investment so they could significantly increase the number of addictive nuggets of social approval that their apps could deliver to their users.
As Sean Parker confirmed in describing the design philosophy behind these features: “It’s a social-validation feedback loop . . . exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”
Let’s step back for a moment to review where we stand. In the preceding sections, I detailed a distressing explanation for why so many people feel as though they’ve lost control of their digital lives: the hot new technologies that emerged in the past decade or so are particularly well suited to foster behavioral addictions, leading people to use them much more than they think is useful or healthy. Indeed, as revealed by whistleblowers and researchers like Tristan Harris, Sean Parker, Leah Pearlman, and Adam Alter, these technologies are in many cases specifically designed to trigger this addictive behavior. Compulsive use, in this context, is not the result of a character flaw, but instead the realization of a massively profitable business plan.
We didn’t sign up for the digital lives we now lead. They were instead, to a large extent, crafted in boardrooms to serve the interests of a select group of technology investors.
As argued, our current unease with new technologies is not really about whether or not they’re useful. It’s instead about autonomy. We signed up for these services and bought these devices for minor reasons—to look up friends’ relationship statuses or eliminate the need to carry a separate iPod and phone—and then found ourselves, years later, increasingly dominated by their influence, allowing them to control more and more of how we spend our time, how we feel, and how we behave.
The fact that our humanity was routed by these tools over the past decade should come as no surprise. As I just detailed, we’ve been engaging in a lopsided arms race in which the technologies encroaching on our autonomy were preying with increasing precision on deep-seated vulnerabilities in our brains, while we still naively believed that we were just fiddling with fun gifts handed down from the nerd gods.
When Bill Maher joked that the App Store was coming for our souls, he was actually onto something. As Socrates explained to Phaedrus in Plato’s famous chariot metaphor, our soul can be understood as a chariot driver struggling to rein in two horses, one representing our better nature and the other our baser impulses. When we increasingly cede autonomy to the digital, we energize the latter horse and make the chariot driver’s struggle to steer increasingly difficult—a diminishing of our soul’s authority.
When seen from this perspective, it becomes clear that this is a battle we must fight. But to do so, we need a more serious strategy: something custom built to swat aside the forces manipulating us toward behavioral addiction, and something that offers a concrete plan for putting new technologies to use in service of our best aspirations rather than against them. Digital minimalism is one such strategy. It’s toward its details that we now turn our attention.