

Johann Hari

Stolen Focus: Why You Can't Pay Attention--And How to Think Deeply Again

Nonfiction | Book | Adult | Published in 2022


Chapters 6-8: Chapter Summaries & Analyses

Chapter 6 Summary: “Cause Six: The Rise of Technology That Can Track and Manipulate You (Part One)”

Tristan Harris, one of the experts behind the documentary The Social Dilemma, learned as a child that the key to performing a successful magic trick was to test “the limits of attention” (106). Controlling people’s attention made it easy to manipulate their behavior, even though they felt they were making their own choices. At Stanford University, Harris studied at the Persuasive Technology Lab, where he learned web design techniques, some based on Skinnerian psychology. While Harris was initially excited about applying these psychological insights in his work, he soon became concerned about their potential for unethical manipulation.

After Harris developed an app that let readers highlight a phrase while reading and instantly see a concise definition or explanation, Google bought the app and hired him. Google employees were constantly brainstorming ways to make people more engaged with their Gmail accounts. This engagement drove profits: The more people use their phones, the more ad revenue Google generates. Harris was disturbed that by changing Gmail’s functions, Google could alter the attention habits of a billion people.

Harris asked colleagues to consider the ethics of their work and to stop creating mindless disruptions to user focus. While some colleagues felt that he was overreacting to Google’s design decisions or inviting unwanted government regulation, many others wanted to continue the conversation about ethics and responsibility in web design. Harris became Google’s first “design ethicist” (117). He felt strongly that Google should stop enticing people to use social media for long stretches of time; however, management did not welcome his recommendations, since consistent user engagement increases profits. Hari notes that tech companies are not deliberately trying to ruin people’s focus but that this is “an inescapable effect of their current business model” (118).

After two years of failing to persuade anyone at Google to change their approach, a depressed Harris quit his job to collaborate with coder-turned-activist Aza Raskin. Raskin is famous for inventing the “infinite scroll” (119)—the function that allows websites to go on indefinitely, rather than being divided into individual pages. While Raskin was initially proud of his invention, he soon came to regret that it led to mindless, addictive, and time-wasting scrolling. Raskin became disenchanted with Silicon Valley and was disappointed that the tech world failed to take responsibility for the rise in hostility and inattention. Harris and Raskin’s dissent prompted other tech professionals to speak out against the intentional addictiveness and manipulation of social media sites.

Chapter 7 Summary: “Cause Six: The Rise of Technology That Can Track and Manipulate You (Part Two)”

Instead of functioning as a tool for people to meet up in real life, social media sites like Facebook prioritize user engagement to increase ad revenue. They also scan messages for keywords, tailoring ads to each user (so typing the word “diaper” on Facebook results in a slew of advertisements for baby products). In-home technology, such as the Amazon Echo and Google Nest Hub, works in the same way, collecting audio data that it uses to refine advertisements. Over time, these companies can make very accurate predictions about their users’ interests. Harvard social psychologist Shoshana Zuboff calls this new form of advertising “surveillance capitalism” (127).

Hari revisits his claim that social media sites “are designed to be maximally distracting. They need to distract us to make more money” (127). However, “this design is not inevitable” (128)—it is a conscious choice and business model that could be reformed. Hari believes that there must be a healthier way to use devices—a nuanced way of considering new technology that avoids the simplistic debate between being “pro-tech” or “anti-tech” (129).

Another addictive aspect of social media is negativity. The algorithms that determine what users see are programmed to prioritize content that keeps users engaged longer. Unfortunately, people tend to engage with negative content longer than with positive content; psychologists call this “negativity bias” (131). As a result, social media feeds are dominated by posts revolving around outrage and conflict. This is detrimental in a variety of ways—including being harmful to focus, since negative emotions have been shown to limit people’s ability to listen and pay attention. Hari laments that many users don’t understand the algorithms behind their social media sites and therefore can’t make informed decisions about engaging with them.

Hari claims that social media is actively reducing society’s ability to identify and solve real problems: When algorithms prioritize “outraging material” (134), fake news proliferates faster, reaching more users than real news does. This material can significantly influence people: A study of white nationalists revealed that most attributed the start of their radicalization to the internet, and to YouTube in particular. For example, Hari attributes the political success of former Brazilian President Jair Bolsonaro to Facebook, which promoted fake news and false attack ads against his opponent. Hari urges readers to be wary of the “cascading effects” (140) of social media’s influence on attention. One such effect is a limited understanding of the world: “[I]f you expose any country to all this for long enough, it will become a country so lost in rage and unreality that it can’t make sense of its problems and it can’t build solutions” (140).

Chapter 8 Summary: “The Rise of Cruel Optimism”

In his book Indistractable (2019), Israeli-American tech designer Nir Eyal argues that self-discipline is the essential tool for avoiding becoming engrossed in one’s devices. Eyal believes that by identifying the “internal triggers” (146) that prompt compulsive internet use, people can understand their behavior and learn to form new habits. Eyal advises people to pre-plan their day and turn off email and phone notifications to stay focused on their tasks.

While there is some value in Eyal’s approach—Hari successfully implemented some of Eyal’s recommendations—the focus on individual responsibility helps Big Tech evade accountability. Hari also questions Eyal’s motivations, since Eyal’s earlier work offers instructions on how to maximize the addictive nature of technology: His book Hooked: How to Build Habit-Forming Products (2013), which teaches designers to exploit people’s “internal triggers” (148) and coax them into online habits, was highly successful in Silicon Valley.

Hari decides that Eyal’s advice is just “cruel optimism” (149): It sounds hopeful, but because it does not address the real problem, it is likely to fail. Management professor Ronald Purser points out that this kind of “cruel optimism” is harmful because it leads people to blame themselves for failing to overcome an addiction for which a broader system also bears responsibility. Resisting such addiction is a privilege, and it is unrealistic to expect it of everyone.

Hari concludes by reiterating that it is crucial to reform Big Tech to solve the problem of addictive technology.

Chapters 6-8 Analysis

To prime his readers to resist, Hari uses these chapters to profile several Big Tech whistleblowers, building the urgency that will eventually lead him to argue for The Need for Collective Action. The people he writes about use evocative comparisons to portray tech algorithms as creepily controlling puppeteers—a rhetorical technique that allows Hari to add emotional resonance to his argument. In a metaphor comparing personal data to a “voodoo doll” (125), Aza Raskin explains that the more information tech companies collect about a user’s viewing and purchasing habits, the more their voodoo doll of data comes to resemble the user, making it increasingly likely that the user can be manipulated like the victim of a voodoo curse. Hari calls this data-driven double of a person “ghoulish” (126)—a horror-tinged description of “surveillance capitalism” haunting people’s digital lives that he hopes will persuade readers that the collection and sale of their data is harmful and frightening. Raskin uses a similar comparison when discussing internet algorithms, which direct people to negative or sensational content so ably that they might as well “start pulling on our marionette strings” (140). In this metaphor, users are helpless puppets whose attention is completely under the power of algorithms. Finally, because some algorithms behave unpredictably even for their inventors, Raskin compares them to Frankenstein’s monster—a powerful creation that has escaped its inventor’s control and is doing real harm to society. This allusion to Mary Shelley’s classic horror novel Frankenstein (1818) positions developers as the reckless and irresponsible Victor Frankenstein, whose meddling in a science he doesn’t fully understand leads to chaos and destruction.

Hari juxtaposes The Individual and Societal Consequences of Distraction created by addictive social media with the blame-avoidance strategies of Big Tech and its advocates. Studies have found that social media tends to exploit people’s “negativity bias,” since “If it’s more enraging, it’s more engaging” (131): The most popular YouTube videos contain sensational or negative words like “hates,” “slams,” and “destroys” (131); Twitter posts that use “moral outrage” words such as “bad,” “blame,” and “attack” (131) see increased engagement and retweeting; and Facebook posts expressing “indignant disagreement” are generally more popular than others (131). Even more problematic is the fact that social media exacerbates the problem of fake news: Researchers at MIT found that fake news proliferates faster than real, fact-checked news on Twitter and Facebook, and YouTube whistleblower Guillaume Chaslot revealed that YouTube’s algorithm tends to recommend increasingly negative content, leading viewers toward extremist material, which can have a radicalizing effect.

Although these problems are clearly wide-ranging and systemic, the tech industry insists that users are individually responsible for managing their increasing distraction. Web designers like Nir Eyal use “ferociously powerful machinery to get us ‘fiendishly hooked’ and in ‘pain’ until our next techno-fix,” and then propose that “the solution was primarily to change our individual behavior” (149). Hari strongly criticizes this hypocrisy and lack of accountability, emphasizing The Need for Collective Action and urging readers to push for regulation of the industry rather than fall for its tactics.
