Games and social media apps are wired to keep users coming back for more. Now, Silicon Valley insiders and tech ethicists say it's time for a code of conduct.

Dr. David Greenfield, president of the Connecticut Psychological Association, is on Good Morning America, sharing his findings on a simmering topic. The excitement around the internet and technology has seemingly reached an inflection point. The regular rollout of new products from Silicon Valley has newly minted companies making millions seemingly overnight while amassing legions of users, but a troubling question emerges as people and relationships become consumed by their screens: Can we be ensnared in the World Wide Web?

Greenfield has come back from the digital ether with insights on internet addiction. Eminently professorial in his gray suit, round glasses and salt-and-pepper horseshoe of hair, he categorizes potential internet abusers: gamblers hitting the markets and casinos, shoppers and auction-goers, porn seekers, news and information fiends, and people consumed with relationships, emails, chat rooms and other simulacra of society.

“The issue is, there’s something very powerful about the interactive communication people experience online, and they end up getting into situations in their life that can be harmful to them,” Greenfield says.

“They get on there and they surf for hours and hours. I call them ‘electronic vagabonds’ in my book.”

Every new link or banner is like pulling the handle on a slot machine, Greenfield says.

It’s August 23, 1999.


Eighteen years later, in the spring of 2017, Tristan Harris, one-time design ethicist at Google, is brandishing a smartphone before Anderson Cooper and America on 60 Minutes and warning us yet again.

“This thing,” Harris says, “is a slot machine.”

Every like, follow, comment and response is a reward, Harris explains. It "hijacks the mind." These are design features, not bugs; the human brain is game-able, and we are currently being played. It's not neutral, Harris tells Cooper. Silicon Valley needs you on its devices, because that is how it makes money.

Natasha Schüll is an associate professor of media, culture, and communication at NYU, the author of Addiction by Design and an advisor to Harris’ Center for Humane Technology. Schüll’s studies of slot machines and other systems designed to ensnare users in an addictive cycle have helped shape Harris’ and the Center’s talking points and ideas.

Schüll coined the term "ludic loop" to refer to people's propensity to repeat the same activity over and over on the basis of receiving just enough reward to keep going. Think of the slot player who, after inserting 20 coins, wins five on a spin and feels it's worth their time to keep playing because they've just "won." The ludic loop can be built by design, and requires four key components: solitude or a sense of isolation; random rewards; fast feedback; and no resolution.

The solitude helps establish a relationship: It’s just you and the process/interaction/activity/app. Once this has been established, the random rewards deliver the barbed hook.

"That seems to be a big one," Schüll said. The effectiveness of randomized rewards has been well established since the work of B.F. Skinner, who would put animals like primates or pigeons in "Skinner boxes," which delivered a reward in response to an action, e.g., a pigeon pecks a switch and gets a pellet of food.

“If things were predictable,” Schüll said, “the pigeon or the person knew exactly when the reward was coming, they’d do something else and then come back to get that reward.” The randomness increases the incentive to try at all times.

Fast feedback—the instant gratification of a like; the leap of joy at a tiny Tinder ember in the corner of the screen—increases the desire to make the next swipe, kill the next monster, pull the next lever. Finally, with no built-in stopping point, the loop can continue unabated by anything but self-control. The gamble is people will not be able to help themselves, and the bet often pays off.
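Schüll's four components map onto code almost directly. A minimal sketch in Python, with invented probabilities and payouts (nothing here is drawn from a real machine or app), shows how a variable-ratio reward schedule with no stopping point plays out:

```python
import random

def ludic_loop(trials: int, reward_prob: float = 0.15, payout: int = 5) -> int:
    """Toy simulation of a variable-ratio reward schedule: every action
    costs a coin, rewards land at random, feedback is instant, and
    nothing inside the loop ever says 'stop'."""
    balance = 0
    for _ in range(trials):
        balance -= 1                       # each pull costs a coin
        if random.random() < reward_prob:  # random reward: timing is unpredictable
            balance += payout              # fast feedback: the payout lands instantly
    return balance                         # no resolution: the loop ends only when we quit

# Illustrative numbers only: expected value per pull is 0.15 * 5 - 1 = -0.25,
# so the player trends steadily downward even while "winning" often.
print(ludic_loop(trials=200))
```

Over 200 pulls, the player "wins" roughly 30 times yet loses about 50 coins on average; the frequent small rewards, not the final tally, are what keep the loop spinning.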

Technologically designed addiction, especially of the social media kind, may not be considered as damaging as drug or gambling addiction because the health effects are less obvious and the financial toll quieter, but Schüll believes it's a problem nonetheless.

“This stuff really affects us,” she said.

“We have to start seeing time spent … as a great cost, in physical, emotional and monetary terms.”


A baby girl plays with a mobile phone while riding in a New York subway, Dec. 17, 2017. Two major Apple investors have urged the iPhone maker to take action to curb growing smartphone use among children, highlighting growing concern about the effects of gadgets and social media on youngsters. Mark Lennihan/AP Photo

Renée DiResta saw the negative potential of the social network early. A founding advisor of the Center for Humane Technology, DiResta is currently the head of policy at Data For Democracy, an organization that seeks to promote the ethical use of data and produces data-driven projects with a "positive and tangible impact on the world," per its website.

The addictive properties of social media lend themselves to more nefarious ends than simply gaming people for cash. They are also the perfect tools for propaganda campaigns. The recent revelations of Russian agents spreading misinformation to undermine trust in U.S. democracy illustrated the depth of the clandestine system. It is a recognition that may have finally pushed major tech companies, Facebook among them, to look in the mirror. It has certainly gotten the average internet user more interested in the math under the hood of their social media accounts.

In 2015, DiResta's attempt to find a California preschool with high vaccination rates for her son—anti-vax parents being an issue in California—led her to map the anti-vax movement on social networks. As legislation began to change in California, closing opt-out loopholes, DiResta noticed that the anti-vaxers' arguments changed too.

“I watched the narrative shift as these longtime anti-vaxers, who’ve been on Twitter as anti-vaxers for a very, very long time, all of the sudden morphed into fans of parental choice,” DiResta said.

She watched them shift the message from bunk science and autism scares to one of individual independence, and find an ally in the Tea Party movement. For DiResta, it was a fascinating real-time insight into how conversations and policy interact. It also showed her how many fake people were involved, including fully automated bots and partially automated accounts called "cyborgs."

“The number of bots, the number of cyborgs,” DiResta said. “We watched how they coordinated their messaging on YouTube, we watched how they organized in secret Facebook groups, and we really began to realize that this was an entire, cross-platform, systemic narrative creation and dispersion effort.”
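DiResta's monitoring tools aren't public, but the core signal she describes—identical messages pushed by several distinct accounts within a narrow time window—is straightforward to sketch. Here is a toy illustration in Python; the records, thresholds and function name are invented for the example:

```python
from collections import defaultdict

def find_coordinated_posts(posts, window_seconds=60, min_accounts=3):
    """Flag messages pushed by several distinct accounts within a short
    time window: a crude signal of bot/'cyborg' coordination."""
    by_text = defaultdict(list)
    for account, text, timestamp in posts:
        by_text[text.strip().lower()].append((timestamp, account))

    flagged = {}
    for text, hits in by_text.items():
        hits.sort()                          # order by timestamp
        accounts = {acct for _, acct in hits}
        span = hits[-1][0] - hits[0][0]      # seconds from first to last post
        if len(accounts) >= min_accounts and span <= window_seconds:
            flagged[text] = sorted(accounts)
    return flagged

# Hypothetical records: (account, message, unix timestamp)
sample = [
    ("@acct_a", "Parental choice now!", 1000),
    ("@acct_b", "Parental choice now!", 1012),
    ("@acct_c", "parental choice now!", 1030),
    ("@human",  "Taking the dog for a walk.", 1031),
]
print(find_coordinated_posts(sample))  # flags the repeated slogan across 3 accounts
```

Real detection work is far messier, accounting for paraphrased text, shared links and cross-platform timing, but the shape of the analysis is the same: group, measure synchrony, flag.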

What has followed is an increasingly fraught journey into something like a Bret Easton Ellis novel, wherein nobody knows who anyone is, where they’re coming from, or their intentions. Trust in the media has been eroded; trust in experts has been eroded; and now, with bots and bad actors all around, trust in each other has finally begun to erode too. It is difficult to tell what is real.

Tools originally developed for the marketing of brands and the amplification of the social network—to win the race for finite attention—are being used to push and promote political messages and ideas. By 2015, the tools were in place for anyone with an agenda to blast it out with all the potency of a corporate brand. As more of these networks arose and grew more coordinated in pushing their messages, a realization dawned.

“It wasn’t the specific actor that’s the problem,” DiResta said. “It was the system.”


Today, a burgeoning wave of tech insiders, ethicists, doctors, researchers, designers and investors is attempting to place people, not clicks or engagements or nameless, faceless active users, at the forefront of tech.

Early this February, Harris and other former employees of Google and Facebook announced the formation of a new organization to try to put the lid back on Pandora's box: the Center for Humane Technology (CHT).

The New York Times article announcing its creation describes the CHT as a "union of concerned experts," a turn of phrase that instantly evokes the Bulletin of the Atomic Scientists and its Cassandra of a Doomsday Clock. Per the Times, the CHT plans to found an anti-tech-addiction lobby and run a school-based advertising campaign in conjunction with Common Sense Media.

“The largest supercomputers in the world are inside of two companies—Google and Facebook—and where are we pointing them? We’re pointing them at people’s brains, at children,” Harris told the Times, evoking Sarah Connor warning the world of Skynet and the Terminators.

The Center for Humane Technology marks a noted change in the discourse of Silicon Valley, as tech designers and insiders are now critiquing their own. Harris is joined by an array of former high-powered employees at companies like Facebook, Google and Apple—including the co-creator of the “Like” button, Justin Rosenstein.

On its website, under the heading “The Problem,” the CHT lays out the issue.

For all the benefits Google, Facebook, Instagram, Twitter, et al. have brought, they have also ignited a mad escalation, what the CHT deftly describes as a "zero-sum race for our finite attention." Discourse and interaction have been commodified, leveraged for eyes and then dollars. These products, the CHT says, are designed to addict us.

The group also offers a solution, a four-pronged approach: to "inspire humane design," "apply political pressure," "create a cultural awakening" and "engage employees." The latter three involve raising awareness among various elements of society: the consumers who use the products, the employees who make them and the lawmakers who can potentially regulate them. "Humane design" is the idea that companies can build their devices and platforms to avoid gaming users with addictive measures in the first place.

Their first step, according to the Times, is a "Ledger of Harms": a catalog of the damage tech products may cause, compiled for engineers and developers concerned about what they are building, along with suggestions for fixing it.


Facebook is acknowledging something many already know: Passively scrolling through social media can make you feel bad. The social media giant's platform has become a daily addiction for hundreds of millions of people. Matt Rourke/AP Photo

“I think social media provides both good and bad things,” said Spencer Greenberg.

An entrepreneur and mathematician with a focus on machine learning, Greenberg is attempting to produce more of the good things. He is the founder of Spark Wave, a "startup foundry" that designs tech products meant to solve specific problems, e.g., UpLift, a self-help app for people with depression.

“I think the most fundamental thing is that sometimes a product’s business model is out of sync with what is valuable to people,” Greenberg said. “And so the way that they make more money is actually not by providing more value.”

Greenberg wishes to raise awareness of the risks inherent in powerful technology. Algorithms, artificial intelligence, machine learning: all are becoming rapidly more dynamic and powerful, and for every scientist at the Virginia Tech Carilion Research Institute using AI and machine learning to help battle mental health disorders, there may be someone else developing ways to use AI's superior pattern-finding abilities for superhuman espionage or to power drone assassins.

Especially important, in his view, is that the next generation of tech leaders knows the risks heading in.

“I do think that entrepreneurs, especially new entrepreneurs starting new companies, are going to be one of the most important groups in all of this,” Greenberg said. “Because if they create companies that again have business models that are not aligned with creating human value, we’re going to have a problem.”


What responsibility lies with the people who empower those entrepreneurs, the venture capitalists and other investors?

Silicon Valley seems to have a special propensity for ignoring that most basic of all capitalistic safeguards, the need to turn a profit. In more traditional spheres, a company that has gone too long without making a profit will eventually run out of capital and be forced to close. In Silicon Valley, dollars rain down as if from a particularly generous deity. Twitter, for example, has raised around $1.5 billion, all for a company that has turned a profit exactly once—this past quarter.

“We haven’t put a strong incentive on acting in an ethical manner because there wasn’t a disincentive for unethical behavior,” said tech ethicist David Ryan Polgar.

That may be changing, however. Polgar points to the fall of Uber CEO Travis Kalanick as a potential watershed moment. Uber’s spotty track record—from its very beginning—and Kalanick’s character flaws were glossed over as the company continued to expand, add users, and be showered in money. Eventually, enough infractions added up, brave former employees came forward, and Kalanick was finally forced out.

Roger McNamee, a founding advisor of the Center for Humane Technology, co-founder of private equity firm Elevation Partners and both an investor in, and critic of, Facebook, has taken a prominent role in the push to put people back at the forefront of tech.

McNamee noticed the same gaming of Facebook’s systems by bad actors that DiResta had observed, and in an op-ed for Washington Monthly described how he had written to Facebook and had his concerns brushed off by Mark Zuckerberg and COO Sheryl Sandberg. Since then, McNamee has been a leading voice in the people-first movement.

Investment pressure over the addictive qualities of products may be growing as well. Jana Partners and the California State Teachers’ Retirement System, investors in Apple, released a letter pushing the company to take a larger role in producing responsible products for children.


Whether it is investors, entrepreneurs, researchers or lawmakers, Polgar hopes to better link these like-minded individuals through his All Tech is Human initiative.

“One of the frustrations that I had over the years is that I think we actually have an overwhelming amount of talent and ideas about solving a lot of these problems,” Polgar said. “But we don’t necessarily create an ecosystem for airing these.”

Polgar wants All Tech is Human to become that ecosystem. If innovative ideas are thrown out at a conference—on, say, fighting social media addiction—but never leave the hotel, what good are they? They need to reach the designers, the consumers, the researchers.

Polgar hopes to bring together various organizations, educators, tech companies, thought leaders, students—"all the stakeholders that are being impacted by technology," a dizzying group considering social media's current Colossus-of-Rhodes reach—and create a system where they can share and present their ideas.

In essence, Polgar is attempting to create a network to fight a network. The issues facing social media right now are simply too complex for any one group of people to solve.

For her part, Schüll, the addictive-design expert, believes an outside regulating body akin to the FDA may need to be established, in addition to designers and consumers taking some responsibility for the addictive properties of tech. Her idea may be gaining momentum: A recent Axios/SurveyMonkey poll of roughly 3,500 adults in the United States found that a majority of Americans are concerned the government will not do enough to rein in Silicon Valley.

"We're not regulating the degree of whatever in a slot machine algorithm or a Candy Crush algorithm," Schüll said, referring to the not-so-secret mathematical sauce that keeps users coming back to an app or game. There is strong evidence that these algorithms affect users' habits and minds, but she is less optimistic than others at the CHT about Silicon Valley's propensity to police itself.

DiResta believes that corporate information sharing, including allowing third-party researchers access, can help defend democracy and stem the flow of misinformation. In the long term, she feels that tech needs to emphasize the design of the systems and what they are incentivizing.

“We need to think about how we are going to be better,” DiResta said. “Because the cohesion of societal values and political systems like democracy are dependent on us doing better.”
