Technology | The Atlantic
The Nomad Who’s Exploding the Internet Into Pieces
May 24th, 2017, 12:35 PM

Dominic Tarr is a computer programmer who grew up on a remote farm in New Zealand. Down in the antipodes, isolation is even more isolating. Getting goods, people, and information to and from Australasia for families like Tarr’s has always been difficult. Bad, unreliable internet service is a particular challenge. Australia and New Zealand are first-world countries with third-world latency.

Today, Tarr lives on a sailboat—another Kiwi staple, alongside sheep and distance. Connectivity is worse on the boat than on the farm, and even less reliable. But that’s by design rather than by misfortune. Tarr started living on the boat after burning out at a previous job and discovering that the peripatetic lifestyle suited him. Unreliable and sporadic internet connectivity became an interesting engineering challenge. What if isolation and disconnection could actually be desirable conditions for a computer network?

He built something called Secure Scuttlebutt, or SSB. It’s a decentralized system for sending messages to a specific community, rather than the global internet. It works by word of mouth. Instead of posting to an online service like Facebook or Twitter, Scuttlebutt applications hold onto their data locally. When a user runs into a friend, the system automatically synchronizes its stored updates with them via local-network transfer—or even by USB stick. Then the friend does likewise, and word spreads, slowly and deliberately.

For the contemporary internet user, it sounds like a bizarre proposition. Why make communication slower, less efficient, and dependent on chance encounters with other people? But Tarr and others building SSB applications think it might solve many of the problems of today’s internet, giving people better and more granular control of their lives online and off.

* * *

The term “Scuttlebutt” comes from the original water-cooler gossip: early 19th-century sailors would dish dirt while drawing water from a cask (a butt) that had been cut open (or scuttled). Being a sailor, Tarr adopted the name for its nautical provenance and because it aptly describes how the system behaves. At first blush, that might sound no different from Twitter and Facebook, where gossip reigns. Isn’t the internet decentralized already, for that matter: a network of servers distributed all around the globe?

Sort of. It has always been concentrated in some ways and dispersed in others. The internet’s precursor, ARPANET, was designed to withstand nuclear catastrophe. Geographically distributed servers could communicate with one another absent a central hub, thanks to the communication protocol TCP/IP. The ARPANET’s infrastructure was decentralized, but that design served a central authority: U.S. national defense.

When the web entered public use in the 1990s, it offered a publishing platform without intermediation, as commercial services like AOL had done for online access. And it worked, for a time, while the network and its user base were small. But the web quickly became unmanageable. Keeping a popular server running became too expensive for ordinary folk and too complicated for non-technical people. And so business dissolved the internet into commercial product offerings. Today, that authority rests in the hands of a handful of big companies that run services used by billions of people.

And those billions do indeed gossip online. But the services they use embrace gossip’s content rather than its form. Facebook and Twitter would only be like water coolers if there were one giant, global water cooler for all workplaces everywhere. That sounds empowering at first—people anywhere can see and spread news and ideas from anyone. But those users are entirely reliant on the service operator. Outages, bans, lack of connectivity, or state suppression might get in the way. More often, companies like Twitter, Facebook, and Google change their services’ behavior or the terms of their usage—especially the way customer data is gathered, stored, and used.

Proponents of decentralized services (which are sometimes abbreviated as “decents”) hope to overcome some of these limitations by scattering the software and data that run online services closer to their ultimate points of use. Tarr’s Secure Scuttlebutt isn’t a social network like Twitter or Facebook, nor is it an email client like Gmail. Instead, it’s a platform for encrypted, automated, and local replication of information. Atop this information, new, decentralized versions of services like Twitter—or anything else—can be built.

The key to Scuttlebutt’s operation is a simple approach to copying information between computer systems—a tricky problem when files keep changing across many machines. Instead of separate documents, images, and other files, like the ones a computer might synchronize via Dropbox, Scuttlebutt treats all data as chunks of content appended to the end of a list—like a new entry in a diary. A cryptographic key validates each new entry in the diary and connects it with its author. This is a bit like how the Bitcoin blockchain works—a chain of linked records, each verified by its cryptographic relationship to the entry that came before it.
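For readers who think in code, here is a minimal sketch of that diary-like structure: an append-only log in which every entry points back to the one before it. This is illustrative Python, not Scuttlebutt’s actual implementation (SSB is written in JavaScript and signs each entry with the author’s private key); a plain SHA-256 hash chain stands in for that cryptographic linkage.

```python
# A toy append-only feed in the spirit of Scuttlebutt. Real SSB signs every
# entry with the author's private key; here a SHA-256 hash chain merely
# illustrates how each entry is bound to its predecessor and its author.
import hashlib
import json
import time

class Feed:
    def __init__(self, author):
        self.author = author
        self.entries = []  # the append-only log

    def append(self, content):
        prev_hash = self.entries[-1]["hash"] if self.entries else None
        entry = {
            "author": self.author,
            "sequence": len(self.entries) + 1,
            "timestamp": time.time(),
            "previous": prev_hash,  # link to the entry before this one
            "content": content,
        }
        # Hash the canonical form of the entry; in SSB, a signature goes here.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Check that every entry hashes correctly and links to its predecessor."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["hash"] != expected or e["previous"] != prev:
                return False
            prev = e["hash"]
        return True
```

Because nothing is ever edited or deleted, only appended, two copies of the same feed can always be reconciled simply by comparing how far along each one is.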

But Scuttlebutt doesn’t carry monetary transactions; it carries a payload of, well, gossip content. As it happens, most popular online services are just lists with new content appended. Twitter and Facebook are like that. So are Instagram and Soundcloud. A magazine like The Atlantic could be understood as an append-only list of articles and videos. Even email is, at base, just a pile of content.

So far, the SSB community has made social-network, messaging, music-sharing, and source-control management software that communicates via Scuttlebutt. But unlike Dropbox, Facebook, or every other Cloud service, Scuttlebutt doesn’t synchronize information by connecting to a central server. Instead, it distributes that data to the subscribers a user happens across. There’s no one Scuttlebutt, but as many as there are users.

Scuttlebutt-driven systems synchronize with one another via local networks—say, when a boat docks at port or a mountaineer descends to base camp. For more deliberate sharing, users can connect to an SSB island on the internet called a pub—as in public house; a virtual tavern where gossip can be shared more rapidly. Since no central server is required, no internet access is required either; a local network or data saved to a USB stick and handed to another user are both sufficient. It’s like having a series of private internets that still work like the online services popular on the commercial internet.
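To make that word-of-mouth replication concrete, here is a rough sketch of how two such peers might reconcile their logs when they meet, building on the feed structure sketched above. The Peer class and synchronize function are illustrative inventions, not SSB’s real protocol, which carries out the same “what’s the newest thing you have from this author?” exchange over encrypted connections, local networks, or USB sticks.

```python
# A toy model of gossip replication: each peer stores append-only feeds keyed
# by author, and when two peers meet, each asks the other for anything newer
# than what it already holds.
class Peer:
    def __init__(self):
        self.feeds = {}  # author -> list of entries, in sequence order

    def latest_sequence(self, author):
        entries = self.feeds.get(author, [])
        return entries[-1]["sequence"] if entries else 0

    def entries_after(self, author, sequence):
        return [e for e in self.feeds.get(author, []) if e["sequence"] > sequence]

    def ingest(self, author, new_entries):
        # Append only entries that extend the log, in order.
        log = self.feeds.setdefault(author, [])
        for e in sorted(new_entries, key=lambda e: e["sequence"]):
            if e["sequence"] == len(log) + 1:
                log.append(e)

def synchronize(a, b):
    """When two peers meet, each pulls whatever the other has that it lacks."""
    for author in set(a.feeds) | set(b.feeds):
        a.ingest(author, b.entries_after(author, a.latest_sequence(author)))
        b.ingest(author, a.entries_after(author, b.latest_sequence(author)))
```

A pub, in this picture, is just another peer that happens to be reachable on the internet and to meet a lot of people.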

* * *

Decentralized software often accompanies political extremes. Bitcoin, a decentralized currency, is associated with the libertarian right, who distrust government. The anarchist left comprises another breed of decent fans. Collective resources and mutual aid are of concern to this group, who hope to replace both market- and state-run services with structures like cooperatives.

Tarr admits that some of the programmers creating and using Secure Scuttlebutt fall squarely into the anarchist camp. But he is more broad-minded about the project’s political aims. For Tarr, the philosophical underpinning of Secure Scuttlebutt is social relativism. Because Scuttlebutt is distributed, each user decides what to do with their network and how to do it. This means that the users of SSB-driven software must consciously deliberate about whom they want to interact with “online,” and where, and why.

Commercial online services, by contrast, regulate user behavior with software and legal controls. Even the way users are identified on a service like Twitter, Instagram, or WhatsApp must conform with the service provider’s wishes. A username is a globally unique ID. Otherwise, how would the service and the users tell one individual from another?

This is only a problem when a service is globalized, built to work for billions of people all at once. But real people deal with duplicate or similar names all the time. They do so by understanding the contexts and communities in which they live. On today’s internet, people don’t get a chance to ponder those circumstances. Instead, every context is the same context: the murky haze of the Cloud. Scuttlebutt doesn’t assume a replacement circumstance; instead it opens the door to many alternatives—the libertarians can have their markets, and the leftists can have their co-ops, and others can have anything in between.

In that respect, Secure Scuttlebutt reveals some of the assumptions of the supposedly normal technologies people use. Isn’t it odd that every online service assumes it should be a global one, for example? Such a design benefits technology companies, of course, but there are obvious downsides. Security and abuse offer examples; those problems arise largely in software that insists on being both global and always-on. By offloading the work of synchronizing data to computers, a task computers do well, Tarr hopes Scuttlebutt can help people do what they do well: manage the real-world relationships that would inspire them to connect through software in the first place.

Some of those uses might entail democratic liberation—a fixation of internet activists, especially since the Arab Spring suggested that social media might help combat tyranny. But the centralized operation of those services also makes them easier for would-be authoritarians to control. Last year, for example, the Turkish government blocked access to Facebook, Twitter, and WhatsApp amid protests related to the arrest of opposition-party leaders. Scuttlebutt could be used to organize people and disseminate information with less risk of state impediment. However, it also works more slowly and reaches a narrower community. In many cases, that might be fine—and it might avoid social media’s tendency to turn remote political unrest into global entertainment.

But the likelier uses for SSB might end up being much more commonplace. For one, it works offline. In many parts of the world, access to reliable, affordable networking is a bigger challenge than access to computing. In India, for example, phones are ubiquitous, but network access is costly and slow. Technology companies have proposed solutions that double down on centralization. Google developed weather-balloon wifi to deliver access to Africa and Asia, and Facebook offered free internet on the subcontinent. Scuttlebutt might provide a simpler option with fewer strings and greater utility.

Connectivity loss also affects the first world, especially for those on the move. When in a subway or on a transcontinental flight—or even in a hotel room—networks are frequently unavailable or unreliable. Many services don’t work at all when a device is offline, even just to show what’s been downloaded since the last connection. They certainly don’t let you author new material offline. The cost and complexity of mobile roaming abroad also hampers always-on network usage. And even when accessible and affordable, constant connectivity has become a burden. Today, people often stay online not because they want to be there, but because there’s no way to avoid it.

Security and privacy offer further rationales for a system like SSB. Cyberattacks are common, and more organizations might want to decouple sensitive data, even when encrypted, from the public internet. Decents might offer a solution. And when it comes to privacy, perhaps the best way to protect one’s personal information is to share it selectively, for specific purposes. Services like Snapchat and Signal have already demonstrated a public preference for such behavior.

These rationales all derive from a bigger one: Centralized services are easy to use, but they offer one-size-fits-all solutions. Why should a social network for a school or a family or a neighborhood work the same way as one meant for corporate advertisers, or government officials, or journalists? Even if Scuttlebutt never catches on, it shows that the future online might be far more customized and diverse than the present. And not just in its appearance, like MySpace or Geocities, but also in its functionality, its means of access, and its membership.

Today, Secure Scuttlebutt is both esoteric and unrefined. For those who aren’t already forking git repositories and hanging out on freenode IRC channels, SSB will feel like a curiosity for eccentric technology geeks. But that’s also how the web once seemed, and Google, and Twitter, and all the rest, even if it’s hard to remember a time when those systems were obscure rather than infrastructural.

* * *

In 19th-century Britain, the Church of England’s role as host of the official religion of the United Kingdom came under scrutiny. The Liberal Party’s drive to separate church and state had become viable, as industrialism, nationalism, and secularism rose to prominence. Decoupling the Anglican church from the state was called disestablishment, and its proponents were known as disestablishmentarians. In turn, conservative opposition to disestablishment was called antidisestablishmentarianism. Disestablishment was eventually achieved in Scotland, Ireland, and Wales, but not in England. In the process, antidisestablishmentarianism became the longest non-technical word in the English language—and a nebulous koan with which to shake a fist against the Man.

Lost in the modern misuse of antidisestablishmentarianism is the way it chains historical contingencies. The antidisestablishmentarians weren’t just proponents of the state church. They were opponents of those who hoped to decouple it from the state at a particular moment in time.

Secure Scuttlebutt exemplifies a similar principle, one that some fellow travelers in the decentralized software community have called counterantidisintermediation. The 20th century saw the rise of intermediation: centralized media systems run by corporations and governments. When the web became popular in the mid-1990s, it promised disintermediation—allowing individuals to reach one another directly, without middlemen. But harnessing disintermediation proved hard for ordinary people, and corporations like Google and Facebook discovered they could build huge wealth facilitating those interactions in aggregate. That’s antidisintermediation. Today, decentralized software projects oppose the centralized control of online media. That’s counterantidisintermediation.

The tech entrepreneur and activist Anil Dash has eulogized “the web we lost.” For Dash, that’s the disintermediationist 1990s. But the internet of that era couldn’t work today, even if the world wanted it back. History moves forward, and people must respond to present conditions. Whether via Secure Scuttlebutt or something else, counterantidisintermediationalism could become the driving political, economic, and technical worldview of the near future. If successful, it might find various political and economic implementations. Bitcoin-style anarcho-capitalism is one. Another is its opposite, leftist collectivist anarchism (Dmytri Kleiner, the apparent coiner of counterantidisintermediationalism, calls himself a “venture communist”).* Another is Dominic Tarr’s equal-opportunity, technical agnosticism, a centrist take that sheds the baggage of anarchy entirely.

Tarr’s pitch is appealing, and a poetic consequence of the counterantidisintermediationalist philosophy. Governments and corporations probably shouldn’t be trusted to contain and manage all of modern life. But neither should extremists, left or right, who happen to know how to program computers. A truly decentralized infrastructure wouldn’t just diversify control of its technical operation. It would also diversify political, economic, and cultural goals.

In an age awash with venture capitalists and billionaires, anarcho-capitalists and conspiracy theorists, oligarchs and neo-authoritarians, perhaps the most compelling vision of the technological future is also the most modest. Scuttlebutt offers one model of that humility. Diverse groups of people networked in equally diverse, and even mutually contradictory, ways—for profit, for community, for anarchy, for globalism, and for localism, among others. No revolution whatsoever. Just people of all stripes, in places of all kinds, who sometimes use computers together.


* This article originally misstated that Dmytri Kleiner coined the term “counterantidisintermediationalism.” We regret the error.

What It's Like to Use an Original Macintosh in 2017
May 24th, 2017, 12:35 PM

I’m a reporter first, and a writer second, which means I often find myself writing in odd places. Not just geographically unusual, though there’s that, too. I write everywhere, with whatever technology is at hand.

Most of the time, I’m typing away in a plain text editor on my laptop. But I still write first drafts in reporter’s notebooks, and in the Notes section of my iPhone, and on scraps of paper when necessary.

Now here’s a first for me: I’m writing a story for The Atlantic in MacWrite 4.5, the word processing program first released with the Apple Macintosh in 1984 and discontinued a decade later. So here I am, awash in 1980s computing nostalgia, clacking away in an emulated version of the original software, thanks to the Internet Archive.

The only problem is, how am I going to file this story into The Atlantic’s 2017 web based content management system? (Also, the hyphen key isn’t working.) But more on that in a minute.

First, let me get out of here and switch back to my regular text editor. The Internet Archive’s latest in-browser emulator lets anyone with internet access play and use dozens of games and programs originally released for the first Apple Macintosh computers in all of their black-and-white, low-resolution glory. (Ah, so nice to have that hyphen back.)

I started writing this article in the MacWrite emulator, a simulation of 1984. (Internet Archive)

Along with MacWrite, the collection includes MacPaint, Dark Castle, The Oregon Trail, Space Invaders, Frogger, Shuffle Puck, Brickles, Prince of Persia, and dozens more. The emulator doesn’t just launch the software itself, but situates users in the old-school Mac operating environment, meaning you often find yourself looking at a 1984-style desktop, and opening the program yourself.

“The presentation represents some shift in philosophies, in terms of what we wanted to do,” says Jason Scott, an archivist at the Internet Archive. Whereas Scott went with a “shock and awe” approach to earlier software emulators—making hundreds of programs available all at once—he decided to go for a more methodical, curated strategy this time. One big reason for this is quality control. He’s still fielding tech-support requests for the MS-DOS emulator the archive released in 2014. (It includes thousands of titles.) But Scott also knew the early Mac programs that people would want to see at the outset.

“The main one was Dark Castle,” Scott told me. “Everyone remembers Dark Castle because it was a particularly well-made, good-looking game—but not even a fun one, I want to point out! People playing it on the Mac emulation are not happy. There are reviews.”

Reviews like: “I can't tell if the emulator is laggy, making my controls unresponsive? Or is this just a terrible game? Maybe a bit of both,” as one person commented on the site.

“They are like, ‘This runs too slow for it to be good,’” Scott told me, “when what they really mean is the game was originally so unfair.”

“But it looks beautiful, and the sound is beautiful, so I knew Dark Castle would be a big deal,” he added.  

For what it’s worth, I only vaguely remember Dark Castle from when I had an Apple IIc. When I tried playing it on the emulator this morning I was repeatedly killed by rabid bats, which I can confidently say is a reflection of my own rustiness and has nothing to do with the emulator quality. (It seemed to run pretty smoothly to me.)

Screen shot of Dark Castle, as played in the Internet Archive’s emulator. (Internet Archive)

But regardless of how well they run, the big question is why it’s worth the drudgery and the painstaking work of presenting ancient programs this way in the first place.

“The existential questions,” Scott said. “What is all this for? What do people need from the original Mac operating systems in the modern era?”

The Internet Archive focused on the early Macintosh era for a few reasons: It was a finite period of time, it represents a particularly rich moment in computing history, and people remain especially interested in the era. “Nostalgia, to be honest, is a huge chunk of it,” he added. “You’ve got people who come in, and look at the old thing, and they’re happy about the old thing, and then they move on.”

If all goes as planned, the next two emulators will be for the Commodore 64, which predated the early Macintosh, and then Windows 98, which came after it. (“That’s only if it works,” Scott emphasized.)

Emulators can be quite buggy, given their complexity. A browser-based system involves the emulated machine running inside the browser’s JavaScript environment, all within the computer running that browser. So, basically, “you’re running a computer within a simulated computer within another computer,” Scott says. “It’s crazy.”

Scott’s also hoping to stretch the very idea of what people can do with emulators.

“The initial burst to emulation on the web was about removing the barrier to old software,” Scott told me. “The next realm will be that you can output the data that’s being generated and export it to your modern machine. That’s basically one developer away from happening right now. That’s the kind of thing people eventually will want and get.”

In the meantime, you can’t copy and paste text from the MacWrite emulator back to a contemporary word processor, for example—which is why I had to retype the opening to this story, letter by letter, just to get it into The Atlantic’s web-publishing program. This is still much easier than my predecessors had it, back when the Macintosh was brand new. It was around that time that my colleague James Fallows wrote a long piece for The Atlantic about his own adventure into computerdom. In 1982, he was using a Processor Technology SOL-20 that had 48KB of random access memory. This was miraculous to him then, as were the floppy disks it took, and the printer he hooked up to the machine—it spit out about one page per minute.

It wasn’t all peachy, even for an early adopter like him. There was the time his computer broke in dramatic fashion, sending him back to his old Smith-Corona typewriter for a full month. And also, Fallows wrote: “Computers cause another, more insidious problem, by forever distorting your sense of time.”

What he meant was that computers change people’s expectations about what we should be able to do, and how quickly we should be able to do it. But this observation, made back in 1982 about machines that were quite different from the ones we use today, also got me thinking about how technology collides with people’s perceptions of time as we look back at it years later. Once-miraculous systems seem impossibly slow. They make contemporary software—and the hardware like smartphones running that software—seem newly extraordinary. Watching a 35-year-old program do what it was designed to do is also an implicit reminder that the best tools we have today will, before too long, seem absurd in their limitations.

And we’re able to see all this because so many people, improbably, save objects like old floppy disks and computers.  “I actually still have the SOL-20, walnut case and all,” Fallows recently told me when I asked him what ever happened to it. Scott, from the Internet Archive, says he’s been flooded with requests from people who want to share the programs they’ve held onto all these decades.

“One person, he wasn’t comfortable mailing his floppies to us, so we had to mail him the equipment,” Scott said. “And now he is bringing up one of a kind—or, I should say, extremely rare—software.” His programs, which will be added to the emulator, include original games that are highly sought-after by collectors, and at least one piece of software that was never available commercially.

“This emulation is bringing back into the froth of contemporary culture the existence of all these old programs,” Scott said. “They’re no longer just words on a page.”

Or in my case, they are words on a page. Words rendered in Apple’s familiar old Chicago typeface, materializing on the screen just the way I remember it from so very long ago.

Silicon Valley's Big Three vs. Detroit's Golden-Age Big Three
May 24th, 2017, 12:35 PM

Over the last 20 years, the technology industry has become the most powerful industry in the world, boasting seven of the 20 most profitable companies. Last year, Apple’s profits ($53.4 billion) were more than double those of the second-most profitable company, J.P. Morgan Chase ($24.4 billion). And when it comes to market value, tech companies sweep the top five: Apple, Google, Microsoft, Amazon, and Facebook. These companies are not only huge and profitable; they’re also growing.

By most measures, though not all, this power is concentrated in one specific region, the Pacific Coast, and even more tightly centralized in the San Francisco Bay Area. Incredibly, three of those five most valuable companies are located in three adjacent little towns in Silicon Valley. The total distance from Facebook in Menlo Park to Alphabet (née Google) in Mountain View to Apple in Cupertino is just 15 miles.

These companies—with apologies to Intel, Oracle, and Cisco—have become the Big Three of Silicon Valley.

Detroit had a Big Three for decades: General Motors, Ford, and Chrysler. They, too, were amazingly profitable, industry-leading companies that birthed a global industry. In the late 1950s, these three companies had over 90 percent market share in the U.S. car market, which was also the world’s largest.

Now, companies from a similarly small region occupy a similarly dominant role in the economy, which has powered economic growth over the last several decades. But a comparison between Detroit’s Big Three and Silicon Valley’s shows how much the economy around any individual company or place has changed.

* * *

Investors now value tech as they once valued automotive (and oil) companies.

It was the IPO of the decade. Thousands of people flocked to brokers hoping to get their hands on some of the paper from one of the century’s most innovative and respected companies. Finally, finally, the common person could share in the wealth generated by the genius of … Ford.

The year was 1956, and Ford, privately held since its inception by the Ford family and (later) the Ford Foundation, was accessing the public markets. More than 10 million shares went on the market and were immediately snatched up by hundreds of thousands of investors at an opening price of $64.50. The Ford Foundation made $642.6 million in the sale.

It was the biggest IPO ever, as befit the automotive industry, which was the biggest at just about everything at mid-century. Likewise, at the time Ford went public, the true behemoth of the American economy, General Motors, was the nation’s most valuable stock, trading at $263.27. And for good reason.

These companies make a ton of money.

In the second (1956) edition of the Fortune 500, Ford held the third slot in revenue and profit. That year, the company made $437 million. General Motors took the top spot, becoming the first company to break $1 billion in profit ($1.19 billion, to be exact). Only 16 companies even made $100 million in 1956. Chrysler was the least profitable of those companies, eking its way into that echelon with $100.1 million in profits.

The only rival the car industry had was the oil industry, which had the number-two company on Fortune’s list, Standard Oil of New Jersey (the forerunner of today’s Exxon Mobil), as well as seven others in the top 20 most profitable companies.

All this to say: making cars and fueling them dominated the American profit-making enterprise. Hell, even the two big tire manufacturers were among the top 35 profit-makers of 1956.

Cars were national. Tech is global.

But there are crucial differences between Detroit’s Big Three and Silicon Valley’s. One is that Silicon Valley’s companies are fully global enterprises.

Since 2015, the majority of Facebook’s ad revenue has come from overseas. Apple crossed that threshold in the first quarter of 2010, and now roughly two-thirds of the company’s revenue comes from abroad. Google, too, has long made a majority of its money outside the U.S., though its home country still accounts for nearly half its revenue.

In fact, all the money that these companies are making overseas is one reason why they are valued so highly, Harvard Business School’s Shane Greenstein told me. “Since the election, the markets have factored in a presumed ‘tax holiday’ that allows firms to repatriate their foreign earnings without U.S. taxes,” Greenstein said. “That especially shapes the values of Apple and Google.”

Since the election, Facebook is up 11 percent, Google is up 21 percent, and Apple is up a gobsmacking 34 percent. Perhaps this is even more remarkable, given that tech company employees gave Hillary Clinton 60 times the money they gave to Donald Trump ($3 million to $50,000).

The tech labor force is a tiny fraction of the automotive industry’s.

The other crucial difference is that tech’s leading companies employ far fewer people than Detroit’s Big Three did. This point can be made in a single chart, but it’s worth unpacking in three ways.

One, even though the big industrial giants did employ a lot of people, by the 1950s they were already automating away some of the jobs that they’d just created by building huge factories. “Between 1948 and 1967—when the auto industry was at its economic peak—Detroit lost more than 130,000 manufacturing jobs,” the historian Thomas J. Sugrue has written. To me, that’s startling. This was the absolute golden age of manufacturing, yet in the seat of the most important industry, companies were shedding jobs.

Two, the car companies’ employees were far more concentrated in Detroit and the surrounding cities than the tech companies’ are. Apple has only 25,000 employees in the “Santa Clara Valley.” Google likely has around 20,000 at its Mountain View home. And call it around 6,000 Facebook folks in Menlo Park.

Three, the tech companies have many, many, many subcontractors, from content moderators in the Philippines to manufacturing workers at Foxconn in China to custodians on their own campuses to bus drivers dragging people up and down from San Francisco. The way modern companies work, they try to keep workers off their own books unless absolutely necessary, especially lower-wage workers.

The original Big Three were the motive power for a whole region’s economy. By employing so many people at decent wages, they created broad-based prosperity. In Silicon Valley, the wealth that the Big Three create goes to a much smaller slice of the population, building wealth for thousands instead of hundreds of thousands of workers. In 2016, Facebook generated $600,000 of net income per employee.

That is to say, the tech world, for all its disruptions, is a supercharged example of how the American economy as a whole works right now: The skilled and the already rich make huge amounts of money, and everyone else gets the leftovers.

Podcasting Is the New Talk-Radio
May 24th, 2017, 12:35 PM

Thinking about what technological innovation has done to journalism in the past two decades can be a dizzying experience. People have more data, better maps, prettier visualizations, more push notifications, faster fact-checking, and so on.

Yet there is a unifying feature behind all of these innovations, and it has to do with the role of media and the public in a democracy.

The news media, the argument goes, must provide the rationally minded members of the public with enough information to see a clear and accurate picture of the world, and thereby become deliberative citizens. In that regard, technology could help news reports become more accurate, data-driven, timely, and thoroughly fact-checked, with rich multimedia embellishment.

Technologically enhanced journalism was supposed to become better at conveying the complexities of our reality to the public. Why, then, instead of an enlightened citizenry, did we find ourselves facing a horde of hateful trolls, hysterical fake-news outlets, a news agenda led by Russian hackers, and a never-ending spiral of conspiracy theories?

Maybe something was lost along the way. One of the fundamental problems with that vision of the role of media in democracy—one that imagines media only as neutral transmitters of information on which the public then rationally deliberates—is that it might not be enough for the news media to hold up a mirror that reflects reality as accurately as possible.

A democratic public only emerges when its members feel concerned with something, and therefore become a public that cares. Here, the public is not an aggregate of rational individuals, but a community that realizes it is affected by certain issues. And to be affected, to be concerned, one has to have some kind of experience or sensation. Journalism, then, should also pay attention to what we could call “sensationalism.” Not in its derogatory sense of exaggerating facts and events, but in the sense that our senses and perceptions, our sensations, inform knowledge in the most basic and important ways.

Among the technological innovations of the last decade, there’s a discreet yet enduring format that may fulfill such an alternative, “sensational” vision of the role of media in democracy: podcasts.

Of course, there is a wide variety of podcast styles and tones, but with their conversational color and their immersion in sound and atmospheres, podcasts have the potential to make you feel things. They bring you to places you’ve never been; they give you the impression of sharing animated kitchen-table banter (or a loud bar argument) with a couple of friends. In that regard, podcasts are a “sensational” medium, a quality that may explain why millions of listeners tune in regularly and listen to long-form episodes that defy all common-sense knowledge about the shortness of our attention span.

One of my current favorites is Reply All, a Gimlet Media podcast that explores internet culture. Its hosts produce silly and fascinating episodes about, among many other curiosities, videos of rats eating slices of pizza in the New York subway. But, springing from the same wide-eyed wonder about anything that pops up from the weird corners of the internet, they also regularly deliver smart and honest reporting about phenomena that shine a vivid light on the current political landscape.

Even further from a traditional current-affairs beat is the endless stream of podcasts about TV shows produced by Bald Move. The two hosts, ex-Jehovah’s Witnesses from the Midwest, record absurdly long podcasts in which they just chat about the latest episode of Game of Thrones or Westworld. Their success (20 million downloads, and counting) may look like another embodiment of the futility of pop culture, until you realize that part of what they do for hours on end is meticulously debunk crazy fan theories—patiently drawing a line between the factual, the plausible, and the ludicrous. Which seems like a useful skill for a democratic public to have.

Freed from the stranglehold of objective or neutral reporting, podcasters act as storytellers rather than merely as journalists, which allows them to take their audiences into unexplored territories that listeners can experience, and maybe care about.

Sound familiar? Maybe that’s because it’s the recipe of talk radio, which has perfectly understood the power of someone just talking to an audience. What podcasting adds to the mix is a diversity of voices that were not heard before, and a capacity to reach audiences that were never in the habit of tuning in to the radio at the same time every day or every week.

That seamless integration of podcasts in people’s lives might be the key feature of what is otherwise a relatively low-tech medium that pretty much recycles the codes and craft of radio. Flexibility and chronicity—whenever and wherever you want, but you’ll hear from us again next week—allow podcasters to build a relationship with their audience, a relationship that is made of sensations, friendliness, and familiarity. Not a spectacular innovation, in terms of technology, but maybe just enough of a shift to realize what media theorist James Carey saw as the role of media, that is, to be the “conversation of our culture.”


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Corporate Surveillance Is Turning Human Workers Into Fungible Cogs
May 24th, 2017, 12:35 PM

Common wisdom tells us that, with time, science fiction becomes reality.

The film Gattaca depicted a world of such advanced genetic manipulation that genetic enhancements for offspring are commonplace for those who can afford them, and employment is strictly dictated by genetic profile—thus reducing the “in-valids,” those without genetic enhancements, to second-class economic status in the labor market.

Recent technological discoveries such as CRISPR, which allows for the editing of the human genome, may soon transport us all to that world. Hastening our arrival is a recent piece of legislation, the Preserving Employee Wellness Programs Act, which is now making its way through Congress. That bill would allow employers to collect the genetic information of their employees through workplace wellness programs.

In addition to the expanded collection of genetic information, technological advancements that allow for the wholesale capture of personal  data—information encompassing the minutiae of the public and private lives of American citizens—represent an urgent issue. That is why it’s essential for us to explore the democratic processes available to American workers to re-exert control over the capture and use of their personal information by employers and data brokers alike.

The problems associated with the collection of health data in the workplace are manifold, as I’ve written in the past. In addition to the potential for employment discrimination on the basis of health status, there is the risk of privacy invasions, as the data collected may be sold to third parties without the knowledge or consent of the employee. When it comes to privacy, workers within our democracy seem powerless against a growing trend toward more invasive management practices enabled by emerging technologies.

As my co-authors and I wrote in an article for the California Law Review, while the internet and associated technologies have heralded the advent of the unbounded workplace, freedom from set work hours, and the gig economy, those technological advancements have also ushered in management practices that call for greater surveillance and control over employees’ information, including of the sort of information that would have earlier been deemed outside the purview of the employer.

In an era of “disrupt everything,” we must pause to ask whether emerging technologies are disruptive to the very fabric of our democracy, not just in the mechanical process of voting in the presidential election, but also in the freedom to direct the course of our everyday work lives. After all, what use is a democracy wherein workers are reduced to quantifiable and fungible cogs, to be easily discarded for showing any signs of human frailty?  


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Historic Rejection Letters to Women Engineers
May 24th, 2017, 12:35 PM

The Society of Women Engineers recently shared a trove of astonishing documents from the group’s archives. They’re letters, loads of them, all directed at women engineering students who had contacted various universities about their interest in connecting with other women studying engineering.

Lou Alta Melton and Hilda Counts, both students at the University of Colorado in 1919, were trying to start their own professional society. Their letters—and the many responses they received—are part of the Society of Women Engineers’ sprawling archives, which are housed at Wayne State University in Detroit.

“We have not now, have never had, and do not expect to have in the near future, any women students registered in our engineering department,” Thorndike Saville, an associate professor at the University of North Carolina, wrote in his reply to Melton. He signed it, “Yours very truly.”

“We do not permit women to register in the Engineering School under present regulations,” wrote William Mott, the dean of the Carnegie Institute of Technology, which would later merge with the Mellon Institute to become Carnegie Mellon.

1919 was the year Congress passed the 19th amendment, granting women the right to vote. But, as so many of the letters in the collection demonstrate, many women wouldn’t be permitted to formally study the subjects that interested them until much later. Discrimination against women in engineering isn’t always so straightforward today, but the forces that push women out of the field (or prevent them from pursuing it in the first place) remain persistent and complex. Women account for some 20 percent of engineering graduates, according to Harvard Business Review, but a huge portion of them either quit or never enter the profession. Much has changed for women engineers in the past century, but perhaps not enough.

“I suspect the number of women who have undertaken general engineering courses is so few that you will hardly be able to form an organization,” William Raymond, the dean of the State University of Iowa wrote in 1919, adding, “However, I may be mistaken.”

Some schools seemed to encourage women to find loopholes so they could at least attend classes—but didn’t take the additional step of letting them pursue a degree. “While we cannot legally register women in the College,” wrote J.R. Benton, the dean of engineering at the University of Florida, in 1919, “there is nothing to prevent our admitting them as visitors to the classes, which permits them to get all the benefit of instruction altho without definite status as students.”

“Hitherto, there has been no demand for engineering courses here on the part of women,” he added, “except in one case, that of Leanora Semmes, who is now taking work in Mechanical Drawing.” A quick search of newspaper archives and digitized books provides no evidence that Semmes ever worked as an engineer—or at least no evidence that she was ever recognized for it.

Counts, one of the letter writers from the Society of Women Engineers archive, is remembered as a trailblazer—her electrical engineering degree was the first ever awarded to a woman in Colorado and she later took a job with the Rural Electrification Administration in Washington, D.C. Melton, the other letter writer, made headlines at least once, when in 1920 she took a job as a civil engineer in the U.S. Bureau of Public Roads.

“Leave it to a woman!” the Iowa City Press-Citizen wrote at the time. “That’s what the  United States Bureau of Public Roads in Denver did when an assistant bridge engineer’s job was open. Miss Lou Alta Melton is filling the place in fine shape.” The newspaper described Melton as the only “girl” graduate in her civil engineering class at the University of Colorado.

A clipping in the Iowa City Press-Citizen described Lou Alta Melton’s unusual hiring as a civil engineer. (Newspapers.com)

One response to Melton’s letter came from the secretary of the T-Square Society, a group of women engineers at the University of Michigan that had already formed. They were interested in a potential partnership, the secretary wrote. But these and other early organizing efforts eventually fell apart, as Margaret E. Layne described in her book, Women in Engineering: Pioneers and Trailblazers, “partly because they followed a logic of maintaining professional standards similar to that used by male national organizations. Hence they excluded engineering students and working women engineers without formal education.”

In other words, the high standards for the hypothetical society were deemed necessary to combat sexism, but the sexism that kept women out of formal programs also thwarted efforts to find a critical mass of women engineers for such a society. It would be decades before the Society of Women Engineers was founded—first as an informal group during World War II, then officially in 1950.

There are still small bright spots in the society’s collection of responses to Melton and Counts. At least one dean of engineering, W.N. Gladson, of the University of Arkansas, wished Melton well. It doesn’t sound like much, but it was more than many other deans were willing to do. “I am aware that in the Northern and Eastern Colleges, often girls register for engineering work and make very excellent students...” Gladson wrote. “Wishing for your organization the fullest measure of success, I am.”

Elsewhere, a professor of mechanical engineering at Georgia Tech seemed to signal that times were changing. (Though he didn’t bother responding to Melton by name.)

“Dear Lady,” wrote J.B. Boon, of Georgia Tech, “Up to the present, women students have not been admitted to [Georgia] Tech.” He added—perhaps optimistically?—that Atlanta officials had taken up the question of women’s suffrage, “so no knowing what may happen!”

How Bots and Humans Might Work Together to Stop Harassment
May 24th, 2017, 12:35 PM

There are some really bad people who harass journalists. Women and minorities, especially, are the targets of extreme vitriol. Yet many newsrooms do little or nothing to protect their employees, or to think through how journalists and organizations should respond when harassment occurs.

Harassers and trolls have multiple motivations: often simple racism or misogyny, sometimes the spread of misinformation, sometimes the suppression of law enforcement or intelligence operations. Frequently, what appear to be multiple harassers are actually sock puppets, Twitter bots, or multiple accounts operated by a single individual.

Sustained harassment can do some serious psychological damage, and I speak from personal experience. Outright intimidation is a related problem, suppressing the delivery of trustworthy news—the kind of news reporting that is vital to democratic governance.

The usual solution is to ignore trolls and harassers, but they can be persistent, and they often game the system successfully. You can mute or block a harasser on Twitter or Facebook, but it's easy enough for them to create a new account in most systems.

If you're knowledgeable in Internet forensics, you can sometimes trace a harasser’s account, and “dox” them—that is, post personally identifiable information as a deterrent. However, that really needs to be done in a manner consistent with site terms and conditions, maybe working with their trust and safety team. (Seriously, this is a major ethical and legal issue.)

Or, if you have a thick skin, you can respond with “shock and awe,” that is, with a brutal response in turn. Or you can try to reason with them, which has sometimes been known to work. Retaliation against professionals, however, often backfires: They’re usually well funded, without conscience, and often very smart.

One way to address rampant harassment would be for news organizations to work with their security departments to evaluate the worst abuse and assess the risks. Sometimes threats are only threats—but sometimes they’re serious. News organizations might also share information about harassers, while respecting the rights of the accused and the terms and conditions of the platforms involved. There are serious legal and ethical questions here, too.

Perhaps news orgs could enlist subscribers or other friends to bring harassment to light. Participants in such a system could simply tweet to the harasser an empty message, or with a designated hashtag, withdrawing approval while avoiding bringing attention to the actual harassment. The empty message might communicate a lot, in zero words.

I believe that the targets of harassment need help from platforms, and here’s the start of a way that could happen. I’m attempting to balance fairness with preventing harassers from gaming the system, so please consider this only a start.

Let’s use Twitter for this thought experiment, mostly because I understand it, and they’re genuinely trying to figure this out.

Suppose you’re a reporter who is a verified user, and you get a harassing tweet. You’d quote-retweet it to a specific account as a way of reporting the harassment. That account would be a bot that could begin to analyze the harassing tweet, entering the email and IP addresses behind it into a database.

Periodically, a process would run to see if there’s a pattern of harassment from that IP or email address; if so, the offending account could be suspended and its owner contacted.
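To make the idea concrete, here is a rough sketch of that reporting-and-review loop. Everything in it is hypothetical (the report structure, the threshold, the review job), and a real system would run on the platform’s side, with access to account metadata and a human trust-and-safety review before any suspension.

```python
# A toy sketch of the proposed pipeline: log each report as it arrives, then
# periodically flag accounts that have drawn reports from enough distinct
# reporters to warrant human review.
from collections import defaultdict
from datetime import datetime

REPORT_THRESHOLD = 5          # assumed cutoff for escalation

reports = defaultdict(list)   # reported account -> list of report records

def record_report(reported_account, reporting_account, tweet_id):
    reports[reported_account].append({
        "reporter": reporting_account,
        "tweet_id": tweet_id,
        "time": datetime.utcnow(),
    })

def review_pass():
    """Periodic job: return accounts with enough distinct reporters to escalate."""
    flagged = []
    for account, entries in reports.items():
        distinct_reporters = {e["reporter"] for e in entries}
        if len(distinct_reporters) >= REPORT_THRESHOLD:
            flagged.append(account)
    return flagged  # hand these to a trust-and-safety team, not an auto-ban
```

Counting distinct reporters, rather than raw reports, is one small guard against a single account gaming the system, though any real scheme would need more safeguards than that.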

While most journalists would find it easy to do such a retweet, perhaps this should be more open to all, which could involve a harassment-report button or menu option on a particular tweet. (There’s a button and other means within the Twitter UI to do some of this, and Twitter has signaled that more’s on the way.)

News orgs also need to step up to protect their own reporters.

They could enlist subscribers or other friends to bring harassment to light. Participants in such a system could simply send an automated tweet to the harasser that says “This account has been reported for harassment and is being monitored by the community.” This type of system publicly tells harassers “you are on notice” and that the community is watching. Note that this might be easily gamed unless the reports come from verified journalists or similar accounts.

Since this is a significant job, social networks may want to test organizing a volunteer community—like the one Wikipedia has—to help monitor the reports and accounts. Social networks could take it a step further and have trained members of the community respond to some of the harassers (not the bots) to discuss why their tweets were reported for harassment. Teaching moments are important in addressing harassment. If the account holder continues the harassment, they get permanently banned from the social network. Some online games have adopted a similar strategy and have had some success with this approach.

I realize these ideas are fairly half-baked; the devil’s in the details. I’m also omitting a lot of detail, since that deeply detailed info could help harassers game this or other systems. In any case, we need to start, somewhere. Harassment and intimidation of reporters is a real problem, with real consequences for democracy.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Who Are the Shadow Brokers?
May 23rd, 2017, 12:35 PM

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of National Security Agency secrets. Since last summer, they’ve been dumping these secrets on the internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time putting sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: We don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits—vulnerabilities in common software—from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the U.S., but with no connection to the agency. NSA hackers find obscure corners of the internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately—and publishing documents that discuss what the U.S. is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the U.S. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense for the leakers to be cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation-state. Whoever got this information years before and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the U.S. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries who fit my two criteria is small: Russia, China, and—I’m out of ideas. And China is currently trying to make nice with the U.S.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the U.S. knows the tools were stolen.

Sure, there’s a chance the attackers knew that the U.S. knew that the attackers knew—and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability the Shadow Brokers released months in advance. Did they have eavesdropping capability inside whoever stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents is Hal Martin, then we can speculate that a random hacker did in fact stumble on it—no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a Washington Post story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them—and it’s long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools—something they also tried last August—with the threat to publish them if no one pays. The group has made good on its previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems—Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade—as it will be for the rest of us.

Facebook Doesn't Understand Itself
May 23rd, 2017, 12:35 PM

Facebook’s 2 billion users post a steady stream of baby pictures, opinions about romantic comedies, reactions to the news—and disturbing depictions of violence, abuse, and self-harm. Over the last decade, the company has struggled to come to terms with moderating that last category. How do they parse a joke from a threat, art from pornography, a cry for help from a serious suicide attempt? And even if they can correctly categorize disturbing posts with thousands of human contractors sifting through user-flagged content, what should they do about it?

This weekend, The Guardian began publishing stories based on 100 documents leaked to them from the training process that these content moderators go through. They’re calling it The Facebook Files. Facebook neither confirmed nor denied the authenticity of the documents, but given The Guardian’s history of reporting from leaks, we proceed here with the assumption that the documents are real training materials used by at least one of Facebook’s content moderation contractors.

The Guardian has so far focused on specific types of cases that come up in content moderation: the abuse of children and animals, revenge porn, self-harm, and threats of violence.  

The moderator training guidelines are filled with examples. Some show moderators being trained to allow remarkably violent statements to stay on the site. This one, for example, is supposed to help content moderators see the difference between “credible” threats of violence and other statements invoking violence.

Check marks mean the statements can stay on Facebook. X marks mean the statements should be deleted by moderators (The Guardian).

The slides suggest that Facebook has begun to come up with rules that cover literally anything distressing or horrible someone could post. But what do they say about the role Facebook sees itself playing in the world it's creating?

In explaining the company’s reasoning about violent posts, a training document says, “We aim to allow as much speech as possible but draw the line at content that could credibly cause real-world harm.”

In the U.S., there is obviously an entire body of legal cases dedicated to parsing the limits and protections of speech. Different places in the world have different rules and norms. But these cases occur in the context of a single national government and its relationship to “free speech.”

Here, we’re talking about a platform, not a government. Facebook is unconstrained by centuries of interpretations of constitutions and legal precedents. It could do whatever it wanted.

They could systematically aim for harm minimization, not speech maximization. That change of assumptions would lead to a different set of individual guidelines on posts. The popular children’s online world Club Penguin, for example, offered multiple levels of language filtering as well as an “Ultimate Safe Chat” mode that only allowed pre-selected phrases to be chosen from a list. At one point, a thousand words were being added to the software’s verboten list per day. But “allow[ing] as much speech as possible” has been part of the ideology of this generation of social media companies from the very beginning.

Getting people to post more, as opposed to less, is the core of Facebook’s mission as a company. It is no surprise that the most successful companies built on sharing come from the United States, the most pro-free-speech country in the world.

Judging from these documents and its public statements, the company has pragmatically chosen to limit areas where it has encountered problems. And those problems are primarily quantified through the flagging that users themselves do.

“As a trusted community of friends, family, coworkers, and classmates, Facebook is largely self-regulated,” one document reads. “People who use Facebook can and do report content that they find questionable or offensive.”

Facebook wants to stay out of it. So Facebook reacts, evolving content moderation guidelines to patch the holes where “self-regulation” fails. Given the number of territories and cultures into which Facebook has integrated itself, one can imagine Facebook’s leadership sees this both as the most reasonable and only practical approach. In cases where they have deployed top-down speech limits, they’ve gotten it wrong, too (as in the “Napalm Girl” controversy).

“We work hard to make Facebook as safe as possible while enabling free speech,” said Monika Bickert, Facebook’s Head of Global Policy Management. “This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously.”

Let’s stipulate that these are difficult decisions on an individual basis. And let’s further stipulate that multiplying the problem by 2 billion users makes the task daunting, even for a company with $7 billion on hand. Facebook has committed to adding 3,000 more content moderators to the 4,500 working for the company today.

But is Facebook’s current approach to content moderation built on a firm foundation? It risks abdicating the responsibility that the world’s most popular platform needs to take on.

“When millions of people get together to share things that are important to them, sometimes these discussions and posts include controversial topics and content,” we read in the training document.  “We believe this online dialogue mirrors the exchange of ideas and opinions that happens throughout people’s lives offline, in conversations at home, at work, in cafes and in classrooms.”

In other words, Facebook holds that the posts on its platform reflect offline realities and are merely a reflection of what is, rather than a causal factor in making things come to be.

Facebook must accept the reality that it has changed how people talk to each other. When we have conversations “at home, at work, in cafes, and in classrooms,” there is not an elaborate scoring methodology that determines whose voice will be the loudest. Russian trolls aren’t interjecting disinformation. My visibility to my family is not dependent on the quantifiable engagement that my statements generate. Every word that I utter or picture that I like is not being used to target advertisements (including many from media companies and political actors) at me.

The platform’s own dynamics are a huge part of what gets posted to the platform. They are less a “mirror” of social dynamics than an engine driving them to greater intensity, with unpredictable consequences.

Facebook’s Mark Zuckerberg seemed to acknowledge this in his epic manifesto about the kind of community that he wanted Facebook to build.

“For the past decade, Facebook has focused on connecting friends and families,” he wrote. “With that foundation, our next focus will be developing the social infrastructure for community—for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.”

To get this “social infrastructure for community” right, Facebook has to acknowledge that it has not merely “connected friends and families." It has changed their very nature.

The Peculiar Prophecies of Nostradonald Trump
May 23rd, 2017, 12:35 PM

Donald Trump doesn’t need a crystal ball; he has a mysterious glowing orb. No, wait. Scratch that. Donald Trump doesn’t need a crystal ball; he has a mysterious, clairvoyant Twitter account.

There seems to be, Trump watchers have noticed, a weirdly prophetic tweet in Trump’s past for every new aspect of his presidency—from his weekends golfing at Mar-a-Lago to each new bombshell scoop about the embattled White House and its alleged ties to Russia.

This goes beyond using classic Trump tweets to insult him, though people are doing that, too—the prototypical example comes from June 2014, when Trump tweeted, “Are you allowed to impeach a president for gross incompetence?”

Trump’s critics are now delighting in the ability to criticize Trump by using his own targeted complaints about others. His past tweets underscore stupendous hypocrisy, they say, and perhaps a hint at an epic political downfall. Democrats have been agitating for Trump’s political demise since before he was the Republican nominee, but even the most apolitical observer would acknowledge how uncanny some of Trump’s past tweets have become.

When the Congressional Budget Office determined that Congressional Republicans’ Trump-supported plan to replace the Affordable Care Act would increase the number of uninsured people by 24 million in the next decade, the internet reached for a Trump tweet from 2014: “It’s Thursday. How many people have lost their health care today?” he’d written at the time.

When Trump ordered a missile strike against Syria in April, people shared this Trump tweet from 2013: “The President must get Congressional approval before attacking Syria-big mistake if he does not!”

This one has been making the rounds, too: “PresObama is not busy talking to Congress about Syria..he is playing golf ...go figure,” Trump tweeted in 2013. Fast forward to 2017 and Trump has already outpaced Obama’s presidential golfing rate. (Obama was a prolific golfer.*)

There’s more.

After reports that Trump is considering a massive troop surge in Afghanistan, this 2013 tweet reappeared: “Let’s get out of Afghanistan. Our troops are being killed by the Afghanis we train and we waste billions there. Nonsense!  Rebuild the USA.”

“Is there a name for the eerie way that Trump subtweeted his entire presidency?” Peter Daou, a former Hillary Clinton adviser, said recently. “There’s truly a tweet for every occasion.” Various observers have compared the phenomenon to everything from mass-produced greeting cards to the elegance of mathematics to science fiction.  

“Seems there’s a hypocritical Trump tweet for almost every occasion,” one Twitter user wrote. “They’re like Hallmark cards.”

And another: “His hypocrisy meter uses a Fibonacci number and it just keeps spinning into infinity through space and time...”

The appeal of reaching for Trump’s old tweets is understandable, and not just because people enjoy pointing out the hypocrisy of politicians they dislike. The medium is meaningful here, too. Rarely are schadenfreude and political commentary packaged together so neatly. Tweets are, by the platform’s very nature, succinct, atomized, and eminently shareable. Trump himself has employed the same tactic in an attempt to point out hypocrisy among his celebrity rivals.

Skipping through the linear order of events this way is also a reflection of warped time as a dominant theme in the Trump presidency—both among supporters who want to travel backward in time to Make America Great Again, and among critics who compare him to the time-traveling Back to the Future villain Biff Tannen (or worse).

Using past tweets as present criticism isn’t just suited to Twitter’s platform, or political culture, or even outright partisanship. This approach also leverages Trump’s blustery style of attacking others as well as the richness of his particular Twitter archive, which goes back to 2009.

And in an irony that’s almost too delectable, there is the fact that so many of Trump’s past attacks against Hillary Clinton in last year’s presidential campaign were based on the premise that she was reckless with classified information—which is now the same criticism Trump faces in one of the biggest scandals of his fledgling presidency. “Crooked Hillary Clinton and her team ‘were extremely careless in their handling of very sensitive, highly classified information.’ Not fit!” he tweeted last July. (Trump’s ongoing refusal to share his tax returns is in similarly sharp contrast to this 2012 tweet: “All recent Presidents have released their transcripts. What is @BarackObama hiding?”)

Given the Russia probe, many of Trump’s old tweets seem to have startling new relevance. Like this one, from October, which people shared amid the news last week that the former FBI Director Robert Mueller had been appointed special counsel to investigate Russian interference in the 2016 presidential election. “If I win,” Trump had tweeted a month before election day, presumably directed at Clinton, “I am going to instruct my AG to get a special prosecutor to look into your situation bc there's never been anything like your lies.”

And this one, from February, which Democrats seized on when The Washington Post revealed Trump had shared highly classified information with Russian leaders in the Oval Office the day after he fired the FBI director James Comey: “The real scandal here is that classified information is illegally given out by ‘intelligence’ like candy. Very un-American!”

Last month, when Trump criticized the Obama administration for having done “nothing” to stop the Assad regime in Syria, people resurrected a string of Trump tweets from 2013: “We should stay the hell out of Syria,” he had tweeted in one case. And also: “Do NOT attack Syria,fix U.S.A.” And also: “Stay away and fix broken U.S.”

And just in case there was any doubt whatsoever: “What I am saying is stay out of Syria.”

This week, after Trump visited Saudi Arabia—where the first lady was photographed without a headscarf—this 2015 Trump tweet resurfaced: “Many people are saying it was wonderful that Mrs. Obama refused to wear a scarf in Saudi Arabia, but they were insulted. We have enuf enemies,” he tweeted in January of that year.

Other figures in the Trump inner circle have made cameos in this internet parlor game. After reports on Monday that Michael Flynn, Trump’s former national security adviser, would invoke his Fifth Amendment right against self-incrimination, a 2013 tweet from Sean Spicer, Trump’s press secretary, sprang back to life online:  “why do u take the 5th if you have done nothing wrong and have nothing to hide?” Spicer had tweeted at the time. It seems to have been a reference to an IRS official who invoked her right not to testify after disclosing the agency’s improper targeting of conservative groups. But untethered from context and time, Spicer’s past commentary seemed linked to Flynn today.

There are so many more examples that “a Trump tweet for everything” has long since crossed over into parody—meaning you should definitely remain skeptical about anything being shared as a past Trump tweet until you verify it for yourself. Consider this delightful but obviously fake mock-up, for example, and always cross reference against the legitimate Trump tweet archive.

For the record, Trump’s pixelated paper trail shows no references to any orb other than the one in words like “Forbes,” “absorb,” and “forbid.” Even in the most surreal political scenarios, there’s only so much you can see coming.

Or, as Trump tweeted in 2013, “Just shows that you can have all the cards and lose if you don’t know what you’re doing.”


* Tracking any president’s time on the green is a longstanding, petty political pastime. Not surprisingly, then, pundits have gone full ouroboros on the Trump-versus-Obama golf question. Conservative commentators are now accusing fact-checking outlets of hypocrisy for tracking Trump’s golf-playing hypocrisy, arguing that fact checkers didn't follow Obama’s golfing schedule as closely. (Many national news organizations, including The New York Times, The Washington Post, and The Atlantic, wrote about Obama’s frequent golfing while he was president.)

A Brief History of SETI@Home
May 23rd, 2017, 12:35 PM

The year was 1999, and the people were going online. AOL, Compuserve, mp3.com, and AltaVista loaded bit by bit after dial-up chirps, on screens across the world. Watching the internet extend its reach, a small group of scientists thought a more extensive digital leap was in order, one that encompassed the galaxy itself. And so it was that before the new millennium dawned, researchers at the University of California released a citizen-science program called SETI@Home.

The idea went like this: When internet-farers abandoned their computers long enough that a screen saver popped up, that saver wouldn’t be WordArt bouncing around, 3-D neon-metallic pipes installing themselves inch by inch, or a self-satisfied flying Windows logo. No. Their screens would be saved by displays of data analysis, showing which and how much data from elsewhere their CPUs were churning through during down-time. The data would come from observations of distant stars, conducted by astronomers searching for evidence of an extraterrestrial intelligence. Each participating computer would dig through SETI data for suspicious signals, possibly containing a “Hello, World” or two from aliens. Anyone with 28 kbps could be the person to discover another civilization.

When the researchers launched SETI@Home, in May of ’99, they thought maybe 1,000 people might sign up. That number—and the bleaker view from outsiders, who said perhaps no one would join the crew—informed a poor decision: to set up a single desktop to farm out the data and take back the analysis.

But the problem was, people really liked the idea of letting their computers find aliens while they did nothing except not touch the mouse. And for SETI@Home’s launch, a million people signed up.  Of course, the lone data-serving desktop staggered. SETI@Home fell down as soon as it started walking. Luckily, now-defunct Sun Microsystems donated computers to help the program get back on its feet. In the years since, more than 4 million people have tried SETI@Home. Together, they make up a collective computing power that exceeds 2008’s premier supercomputer.

But they have yet to find any aliens.

* * *

SETI is a middle-aged science, with 57 years under its sagging belt. It began in 1960, when an astronomer named Frank Drake used an 85-foot radio telescope in Green Bank, West Virginia, to scan two Sun-like stars for signs of intelligent life—radio emissions the systems couldn’t produce on their own, like the thin-frequency broadcasts of our radio stations, or blips that repeated in a purposeful-looking way. Since then, scientists and engineers have used radio and optical telescopes to search much more of the sky—for those “narrowband” broadcasts, for fast pings, for long drones, for patterns distinguishing themselves from the chaotic background static and natural signals from stars and supernovae.
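To make the “narrowband” idea concrete, here is a minimal sketch in Python (illustrative only, not SETI@Home’s actual pipeline, and with made-up numbers for the receiver and the hidden tone). It shows how a Fourier transform lets a faint signal confined to a single frequency rise above broadband noise:

```python
# Illustrative sketch, not SETI@Home's real search code: a narrowband signal
# concentrates its power in one tiny slice of the spectrum, so an FFT makes
# it stand out against noise that is spread across all frequencies.
import numpy as np

rate = 2_000_000             # samples per second (hypothetical receiver bandwidth)
t = np.arange(rate) / rate   # one second of data

# Broadband noise plus a faint tone at 437,123 Hz, standing in for a "beacon"
noise = np.random.normal(0.0, 1.0, rate)
tone = 0.01 * np.sin(2 * np.pi * 437_123 * t)
signal = noise + tone

spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power in each frequency bin
freqs = np.fft.rfftfreq(rate, d=1.0 / rate)

# Flag bins whose power rises far above the typical noise level
threshold = 30 * np.median(spectrum)
candidates = freqs[spectrum > threshold]
print("candidate narrowband frequencies (Hz):", candidates)
```

The tone is far too weak to see in the raw samples, but because all of its energy lands in a single frequency bin, it dwarfs the noise there once the data is transformed.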

But the hardest part about SETI is that scientists don’t know where ET may live, or how ET’s civilization might choose to communicate. And so they have to look for a rainbow of possible missives from other solar systems, all of which move and spin at their own special-snowflake speeds through the universe. There’s only one way to do that, says Dan Werthimer, the chief SETI scientist at Berkeley and a co-founder of SETI@Home: “We need a lot of computing power.”

In the 1970s, when Werthimer’s Berkeley colleagues launched a SETI project called SERENDIP, they sucked power from all the computers in their building, then the neighboring building. In a way, it was a SETI@Home prototype. In the decades that followed, they turned to supercomputers. And then, they came for your CPUs.

* * *

The idea for SETI@Home originated at a cocktail party in Seattle, when computer scientist David Gedye asked a friend what it might take to excite the public about science. Could computers somehow do something similar to what the Apollo program had done? Gedye dreamed up the idea of “volunteer computing,” in which people gave up their computers’ processing power for the greater good when those machines were idle, much like people give up their idle cars, for periods of time, to Turo (if Turo didn’t make money and also served the greater good). What might people volunteer to help with? His mind wandered to The X-Files, UFOs, hit headlines fronting the National Enquirer. People were so interested in all that. “It’s a slightly misguided interest, but still,” says David Anderson, Gedye’s graduate-school advisor at Berkeley. Interest is interest is interest, misguided or guided perfectly.

But Gedye wasn’t a SETI guy—he was a computer guy—so he didn’t know if or how a citizen-computing project would work. He got in touch with astronomer Woody Sullivan, who worked at the University of Washington in Seattle. Sullivan turned him over to Werthimer. And Gedye looped in Anderson. They had a quorum, of sorts.

Anderson, who worked in industry at the time, dedicated evenings to writing software that could take data from the Arecibo radio telescope, mother-bird it into digestible bits, send it to your desktop, command it to hunt for aliens, and then send the results back to the Berkeley home base. No small task.

They raised some money—notably, $50,000 from the Planetary Society and $10,000 from a Paul Allen-backed company. But most of the work-hours, like the computer-hours they were soliciting, were volunteer labor. Out of necessity, they did hire a few people with operating-system expertise, to deal with the wonky screensaver behavior of both Windows and Macintosh. “It’s difficult trying to develop a program that’s intended to run on every computer in the world,” says Anderson.

And yet, by May 17, 1999, they were up, and soon after, they were running. And those million people in this world were looking for not-people on other worlds.

One morning, early in the new millennium, the team came into the office and surveyed the record of what those million had done so far. In the previous 24 hours, the volunteers had done what would have taken a single desktop one thousand years to do. “Suppose you’re a scientist, and you have some idea, and it’s going to take 1,000 years,” says Anderson. “You’re going to discard it. But we did it.”

After being noses-down to their keyboards since the start, it was their first feeling of triumph. “It was really a battle for survival,” says Anderson. “We didn’t really have time to look up and realize what an amazing thing we were doing.”

Then, when they looked up again, at the SETI@Home forums, they saw something else: “It was probably less than a year after we started that we started getting notices about the weddings of people who met through SETI@Home,” says Eric Korpela, a SETI@Home project scientist and astronomer at Berkeley.

* * *

The SETI astronomers began to collect more and different types of data, from the likes of the Arecibo radio telescope. Operating systems evolved. There were new signal types to search for, like pulses so rapid they would have seemed like notes held at pianissimo to previous processors. With all that change, they needed to update the software frequently. But they couldn’t put out a new version every few months and expect people to download it.

Anderson wanted to create a self-updating infrastructure that would solve that problem—and be flexible enough that other, non-SETI projects could bring their work onboard and benefit from distributed computing. And so BOINC—Berkeley Open Infrastructure for Network Computing—was born.

Today, you can use BOINC to serve up your computer’s free time to develop malaria drugs, cancer drugs, HIV drugs. You can fold proteins or help predict the climate. You can search for gravitational waves or run simulations of the heart’s electrical activity, or any of 30 projects. And you can now run BOINC on GPUs—graphics processing units, brought to you by gamers—and on Android smartphones. Nearly half a million people use the infrastructure now, making the système totale a 19-petaflop supercomputer, the third-largest megacalculator on the planet.

Home computers have gotten about 100 times faster since 1999, thank God, and on the data-distribution side, Berkeley has gotten about 10 times faster. They’re adding BOINC as a bandwidth-increasing option to the Texas Advanced Computing Center and nanoHUB, and also letting people sign up for volunteer computing, tell the system what they think are the most important scientific goals, and then have their computers be automatically matched to projects as those projects need time. It’s like OkCupid dating, for scientific research. BOINC and SETI@Home can do more work than ever.

* * *

The thing is, though, they’ve already done a lot of work—so much work they can’t keep up with themselves. Sitting in a database are 7 billion possible alien signals that citizen scientists and their idle computers have already uncovered.

Most of these are probably human-made interference: short-circuiting electric fences, airport radar, XM satellite radio, or a microwave opened a second too soon. Others are likely random noise that added up to a masquerade of significance. As Anderson says, “Random noise has the property that whatever you’re looking for, it eventually occurs. If you generate random letters, you eventually get the complete works of Shakespeare.” Or the emissions are just miscategorized natural signals.

Anderson has been working on a program called Nebula that will trawl that billions-and-billions-strong database, reject the interference, and upvote the best candidates that might—just might—be actual alien signals. Four thousand computers at the Max Planck Institute for Gravitational Physics in Germany help him narrow down the digital location of that holiest of grails. Once something alien in appearance pops up—say, from around the star Vega—the software automatically searches the rest of the data. It finds all the other times, in the 18 years of SETI@Home history, that Arecibo or the recently added telescopes from a $100 million initiative called Breakthrough Listen have looked at Vega. Was the signal there then too? “We’re kind of hoping that the aliens are sending a constant beacon,” says Korpela, “and that every time a telescope passes over a point in the sky, we see it.”

If no old data exists—or if the old data is particularly promising—the researchers request new telescope time and ask SETI colleagues to verify the signal with their own telescopes, to see if they can intercept that beacon, that siren, that unequivocal statement of what SETI scientists and SETI@Home participants hope is true: That we are not alone.

So far, that’s a no-go. “We’ve never had a candidate so exciting that we call the director and say, ‘Throw everybody off the telescope,’” says Werthimer. “We’ve never had anything that resembles ET.”

And partly for that reason, the SETI@Homers are now working on detecting “wideband” signals—ones that come at a spread spectrum of frequencies, like the beam-downs from DIRECTV. Humans (and by extension, extraterrestrials) can embed more information more efficiently in these spread-spectrum emissions. If the goal is to disseminate information, rather than just graffiti “We’re here!” on the fabric of spacetime, wideband is the way to go. And SETI scientists’ thinking goes like this: We’ve been looking mostly for purposeful, obvious transmissions, ones wrapped neatly for us. But we haven’t found any—which might mean they just aren’t there. Extraterrestrial communications might be aimed at members of their own civilizations, in which case they’re more likely to go the DIRECTV route, and we’re likely to find only the “leakage” of those communication lines.

“If there really are these advanced civilizations, it’d be trivial to contact us,” says Werthimer. “They’d be landing on the White House—well, maybe not this White House. But they’d be shining a laser in Frank Drake’s eyes. I don’t see why they would make it so difficult that we would have to do all this hard stuff.”

And so humans, and our sleeping computers, may have to eavesdrop on messages not addressed to us—the ones the aliens send to their own (for lack of a better word) people, and then insert ourselves into the chatter. “I don’t mean to interrupt,” we might someday say, “but I couldn’t help overhearing...” And because of SETI@Home and  BOINC, it might be your laptop that gets that awkward conversation started.

How Applied Mathematics Could Improve the Democratic Process
May 23rd, 2017, 12:35 PM

American voting relies heavily on technology. Voting machines and ballot counters have sped up the formerly tedious process of counting votes. Yet long-standing research shows that these technologies are susceptible to errors and manipulation that could elect the wrong person. In the 2016 presidential election, those concerns made their way into public consciousness, worrying both sides of the political fence. The uncertainty led to a set of last-minute, expensive state recounts—most of which were incomplete or blocked by courts. But we could ensure that all elections are fair and accurate with one simple low-tech fix: risk-limiting audits.

Risk-limiting audits are specific to elections, but they are very similar to the audits that are routinely required of corporate America. Under them, a random sample of ballots is chosen and then hand-counted. That sample, plus a little applied math, can tell us whether the machines picked the right winner.

In nearly all cases, a risk-limiting audit can be performed by counting only a small fraction of ballots cast.  For example, the M.I.T. professor Ron Rivest calculates that Michigan could have checked just 11 percent of its ballots and achieved 95 percent confidence that their machine-counted result correctly named Donald Trump the winner of Michigan's electoral votes. Texas and Missouri, with their wider margins in the presidential race, could have counted a randomly chosen 700 ballots and 10 ballots, respectively, to achieve the same confidence level.  
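To get a feel for why a small hand count carries so much statistical weight, here is a rough simulation sketch in Python. It is illustrative only: the vote shares are hypothetical, and this is not the risk-limiting procedure that Rivest and election auditors actually use. It simply estimates how often a random sample of a given size would show the reported winner ahead:

```python
# Illustrative simulation (not the real audit math): how often does a random
# hand-counted sample of `sample_size` ballots show the reported winner leading?
import numpy as np

def agreement_rate(reported_winner_share, sample_size, trials=100_000):
    """Fraction of simulated samples in which the reported winner holds a strict majority."""
    winner_votes = np.random.binomial(sample_size, reported_winner_share, size=trials)
    return float(np.mean(winner_votes * 2 > sample_size))

# Hypothetical two-candidate races (shares are made up, not the 2016 results)
print(agreement_rate(0.55, sample_size=700))    # comfortable margin: the sample almost always agrees
print(agreement_rate(0.505, sample_size=700))   # razor-thin margin: far more ballots would be needed
```

The wider the reported margin, the fewer ballots an audit needs to reach the same level of confidence, which is why the sample sizes above vary so dramatically from state to state.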

Since risk-limiting audits verify elections while minimizing the number of audited ballots, they are both inexpensive and speedy. They largely eliminate the need for emergency recruitment of recount workers and can be conducted before the election must be certified by law. This also means that auditing can become a routine part of every election. Regular auditing will also allow state and county election officials to become more skilled at spotting problems, from mundane system errors to deliberate hacking, something that is difficult for them to do today.

Colorado has been working on audits since 2011, and is ready to take the next step: Risk-limiting audits will be required in Colorado’s 2017 election. More states should follow Colorado’s bold lead.

Yet too many states still have electronic voting machines with no paper trail, meaning that no audit is possible at all. And all audits are not created equal. After the 2016 election, many Wisconsin counties simply ran ballots through their tabulating machines a second time and called it an “audit.” But if the machines were broken or compromised, the same inaccuracies they registered the first time would show up again the second time.

Technology is already deeply embedded in our voting systems. The next step isn’t to pile on more technology; it’s ensuring that the technology we rely on works properly and has not been hacked or undermined. The way to do that is clear: standard election procedure should include risk-limiting audits. If the Nevada Gaming Commission can establish detailed audit requirements for Keno, we can certainly do the same for our democracy.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

The Westernization of Emoji
May 22nd, 2017, 12:35 PM

When the restaurant Fortune Cookie opened in Shanghai, in 2013, local patrons were mystified. The food was Chinese, but also not Chinese at all. Crab rangoon, sticky orange chicken, and fortune cookies are staples of American Chinese food. They’re rarely found in China.

Fortune Cookie’s owners wanted to introduce China to Chinese food as Americans know it—characterized by startlingly sweet flavors and laughably huge portions. For authenticity, the restaurant’s owners had to import ingredients like Skippy peanut butter and Philadelphia cream cheese. And when restaurant staffers first saw the white-and-red takeout boxes, some of them gathered around to take photographs. The cardboard containers seemed like something out of a sitcom to Chinese workers, who had only ever seen them before on American television shows like Friends and The Big Bang Theory, Fortune Cookie’s owners told news outlets at the time.

“I never saw any fortune cookie in my life until I was a teenager,” said Yiying Lu, a San Francisco-based artist who was born in Shanghai. Lu encountered her first fortune cookie when she left China and moved to Sydney, Australia.

Now, the fortune cookie she designed for the Unicode Consortium will be one of dozens of new emoji that are part of a June update. Lu also created the new emoji depicting a takeout box, chopsticks, and a dumpling.

The irony, she says, is that two of the four new Chinese-themed emoji—the fortune cookie and the takeout box—are not Chinese Chinese, but instead reflect Westernized elements of Chinese culture. “It’s kind of like Häagen-Dazs,” Lu told me. “People think it’s Scandinavian just because of the two dots in the name, but it’s American. It’s the same thing with the takeout box. The Chinese takeout box is completely invented in the West. And the fortune cookie was invented by a Japanese person, but it was popularized in America.”

Emoji, too, were invented by a Japanese person before becoming hugely popular in the United States. For people outside of Japan, emoji were a charming and mysterious window into Japanese culture. The fact that they weren’t globally representative was part of what made emoji fascinating to people in the Western world.

Shigetaka Kurita, who designed the first emoji in 1999, never expected them to spread beyond Japan. But they did. And now they’re everywhere, thanks to the widespread adoption of the smartphone.

“The whole reason emoji are taking off the way they are is largely because of Apple, which is an American company,” said Christina Xu, an ethnographer who focuses on the social implications of technology. And although the Unicode Consortium—which standardizes how computers communicate text and agrees upon new emoji—is an international group, most of its voting members are affiliated with American companies like Apple, Google, Facebook, Oracle, and IBM. “So even when it is about other cultures, it’s still about America,” Xu said.

Xu, who was born in China and grew up in the United States, says she has “mixed feelings” about the fortune cookie and takeout box emoji, and the extent to which they reflect how Westernized emoji seem to have become in the nearly two decades since Kurita’s first designs.

“I lump the fortune cookie and takeout box into American emoji in the same way that the taco emoji is about the American experience,” she told me. “Because there is this funny sense of feeling like we somehow deserve [certain emoji]. The outrage about the lack of the taco emoji was such a Bay Area thing—like it is inconceivable to us that we could lack representation of things that are central to our specific experience.”

“I identify as Chinese and Chinese American,” she added. “And as a Chinese American, I don’t really feel like we deserve a fortune cookie. It seems so limited. There are 1.5 billion Chinese people all around the world and there are more universal signs of our shared culture than a takeout box or fortune cookies. Those things are so specific to a narrow band of the Chinese experience.”

On the other hand, she says, they’re just emoji. And the fixation with depicting ever more emoji, and ever more realistic emoji, has taken away from some of their inscrutability—which was always a core part of their appeal.

“They accumulate whatever culture gets hanged onto them, and that is the fun part,” Xu said. “So this idea that we’re going to somehow create a truly diverse emoji set, when the concept of diversity itself is so essentially American? It’s almost a disguised form of American cultural dominance. It’s going to a place where it’s overly deterministic.”

Lu, who is also known for her design of the old-school Twitter fail whale, stumbled into emoji art by accident. It all began in a conversation with Jenny 8. Lee, who runs the literary studio Plympton, about how useful a dumpling emoji would be. The pair then launched a Kickstarter campaign advocating for the dumpling with a small group of emoji enthusiasts.

The first dumpling design Lu created had heart eyes. “That one was inspired by the poop emoji because it has a really funny face and it’s just the circle of life,” Lu told me. “You eat a dumpling and it becomes poop.”

The anthropomorphized dumpling didn’t last.

Lu’s first two designs of the dumpling emoji (Yiying Lu)

Emoji food typically don’t have faces, the Unicode Consortium told her, and most foods are portrayed at a 45-degree angle.

Emoji foods are often depicted at a 45-degree angle.

“So I said, ‘Okay, let me do research,’” Lu said. The research involved looking at (and eating) a lot of dumplings. “But it was hard! I had to figure out how do I represent the little folds in a way that it’s still abstract enough and simple enough but iconic enough.”

Lu’s final dumpling design, which was accepted by the Unicode Consortium. (Yiying Lu)

Lu says the dumpling project was a way of making her own “little contribution to cross-cultural communication in the age of globalization,” and notes that she relied on others for cultural feedback in her subsequent designs. The first chopsticks she created were portrayed as crossed, which is considered impolite. Someone pointed this out to Lu on Twitter in response to the draft image. “I was born in China!” she said. “I thought I knew my root culture pretty well, but no! I was wrong.”

In Lu’s initial design, the chopsticks were crossed. (Yiying Lu)
Lu uncrossed the chopsticks for her final design. (Yiying Lu)

Lee, who is the author of The Fortune Cookie Chronicles: Adventures in the World of Chinese Food, says she’s “very proud” to have played a role in bringing the dumpling, takeout box, chopsticks, and fortune cookie to the realm of emoji. (She's also working on making a documentary film about emoji.) And as a non-voting member of the Unicode Consortium’s technical committee, Lee knows first-hand how seriously the group weighs issues related to representation.

“We had this big long debate about whether zombies and vampires can take race,” she told me, referring to the forthcoming zombie and vampire emoji. Ultimately, the consortium decided that people can select different skin tones for zombies and vampires—but not for genies. “They’re just blue,” she said. “The genies are raceless.”

Yiying Lu and Jenny 8. Lee pose with Lu’s dumpling design. (Yiying Lu)

“The people who fight the hardest for certain emoji are usually trying to fight for representation for themselves in some way,” Lee told me. “Most linguists say emoji are not currently a language—they’re paralinguistic, the equivalent of hand gestures or voice tone. But for people who use them, it’s almost like fighting for a word that [shows] you exist. When you come up with a word to describe your population, it’s a very powerful thing.”

Powerful but also impermanent. Language changes constantly. Cultural context shifts. Back in Shanghai, Fortune Cookie stayed open longer than its initial critics predicted, but it still didn’t last. The restaurant closed abruptly last year. Its owners said at the time they’d decided to move back to America.

Getting to Know Your Online Doppelganger
May 22nd, 2017, 12:35 PM

Much has been made of the existence of “filter bubbles,” the information feedback loop in which our preferences and viewpoints are continually amplified. This can happen in the analog world—how many of us would go out of our way to actually spend time with people whose worldviews are radically different from ours?—but is perhaps most often referenced as an artifact of our digital lives. There, sophisticated recommendation algorithms generally (if not always) show us the material we are most likely to like, or at worst least likely to hate, so as to initiate or sustain future click-throughs and extended visits to a website.

It has been suggested that filter bubbles were at least partially responsible for the election of Donald Trump, engendering an environment of optimism and overconfidence in the Democratic faithful when in fact the sky was falling around them. Moreover, life in the Democratic filter bubble was presumably not only feeding happy election news to its constituents; it was also keeping out news about the large cohort of disaffected Democrats who were not energized enough to get to the polls, or who were angry enough to switch parties, stay home, or turn out for the first time in a long time for a populist and hatemongering candidate. In short, after the election, the words you would hear echoing around the filter bubble were: “Who knew?”

Technology contributed to the building of ideological walls, but it can also help knock them down. Let’s start with Google, most people’s portal to the world’s information. The days of simple “PageRank,” the anodyne algorithm based almost entirely on the link structure of the web, are long gone. Search now depends on a host of other variables, related to the phenomenon (and niche information industry) of “search engine optimization,” the very name of which tells you that searching is something that can be gamed. That said, what if Google—or any web process that returns or pushes information to you—gave you access to a simple dashboard that would allow you to experience the information environment as your different-minded twin digital avatar does?

Think of your digital representation as a point in space—which is in fact how many of these companies represent you, but in a space that has possibly hundreds if not thousands of dimensions! Such a dashboard would show you what digital life looks like from the other side of that universe. The Wall Street Journal conducted a small-scale experiment of this nature with a program that generated side-by-side views of liberal and conservative Facebook feeds.
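As a toy illustration of that point-in-space idea, the Python sketch below represents a user and a few stories as short vectors and ranks the stories by cosine similarity; negating the user’s vector then ranks the same stories as a hypothetical “different-minded twin” would see them. Every name and number here is made up, and real recommendation systems are vastly more elaborate:

```python
# Toy sketch of the "point in space" idea, with invented data: a user is a
# preference vector, stories are vectors, and ranking is by cosine similarity.
# Negating the user's vector produces the feed a mirror-image user might see.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical story embeddings (real systems use hundreds of dimensions)
stories = {
    "story_a": np.array([0.9, 0.1, -0.3]),
    "story_b": np.array([-0.7, 0.4, 0.2]),
    "story_c": np.array([0.1, -0.8, 0.5]),
}

user = np.array([0.8, 0.2, -0.1])   # your inferred preferences
mirror_user = -user                  # the "different-minded twin"

def rank(profile):
    return sorted(stories, key=lambda name: cosine(profile, stories[name]), reverse=True)

print("your feed:  ", rank(user))
print("mirror feed:", rank(mirror_user))
```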

Companies could do that without giving you a look under the hood—in other words, you wouldn’t have to know precisely how the algorithm works—simply by offering a few knobs to turn to change your trip through the digital universe.

What if, alongside Google’s list of “top sites,” you were given a list of randomly chosen sites from the tail? It might even provide a way for Google or another vendor to broaden your tastes. When it came to delivering news, you might find yourself exposed to sites and sources that you would never come into contact with during your daily information strolls. You might find yourself, if but for a moment, walking in another person’s digital shoes.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

When a Robot Names a New Color of Paint
May 22nd, 2017, 12:35 PM

Janelle Shane had been playing with recurrent neural networks—a type of machine-learning software—for more than a couple months when the computer told her to put horseradish in a chocolate cake.

The request didn’t come out of the blue. Inspired by Tom Brewe, another AI researcher, Shane had been asking her neural net to come up with recipes. She fed it thousands of cookbooks, then asked it to generate new, similar texts. The magic of neural nets is that, even though the computer does not “understand” what a recipe is in the same sensational ways that a human does, it can eventually approximate a recipe well enough to cough out a quasi-realistic one.

That’s what Shane’s did. It told her to combine butter, sugar, eggs, milk, baking powder, cocoa, vanilla extract, and peanut butter—with 1 cup of horseradish. And then it told her to boil it in the oven. (The neural net never quite mastered verbs.)

She laughed it off and tweeted about it. But after another AI researcher told her the recipe was actually delicious, she made it for herself and for a small dinner party of friends.

“I opened the oven and my eyes just watered,” she told me. “It was horrible. I had never tasted such a horrible chocolate thing in my whole life.”

Shane, 33, is not a professional artificial-intelligence researcher. During the day, she works with laser beams for a small research company in Boulder, Colorado. But she plays around with artificial intelligence in her free time.

Which is where Stoomy Brown comes in.

On Thursday, Shane posted the results of another experiment that has since gone viral. She fed the same neural-network software about 7,700 Sherwin-Williams paint colors. These are the types of impossibly named hues that you see in Home Depot: Burlington green, Terra cotta, Rustic earth. What would happen if a robot tried to simulate them?

At first, it struggled:

Recurrent neural networks “learn” by repeatedly processing the data given to them. Unlike a typical computer program, which runs certain pre-set functions on a large data set, a neural network learns probabilistically what the set “looks” like. As it builds this model, it spits out new approximations of the data set—data that wasn’t included in the original set, but which could have been.

In the case of the type of program that Shane uses, the model learns character by character: It figures out which character is most likely for a certain spot, then it moves on to the next, and the next after that. Hence the above checkpoint, in which the net has learned that “a” and “e” are both common letters that often go together… but it hasn’t learned much else. (On the upside, Caae Brae does sound like a Beowulf character.)
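The sketch below illustrates that character-by-character process with something far simpler than the recurrent network Shane used: a count-based model, trained on a tiny made-up list of paint names, that repeatedly picks a plausible next character until it decides to stop:

```python
# A much simpler stand-in for Shane's recurrent network: count how often each
# character follows each other character in a toy corpus, then generate new
# names one character at a time by sampling from those counts.
import random
from collections import defaultdict

names = ["Burlington Green", "Terra Cotta", "Rustic Earth"]  # toy training data

# "^" marks the start of a name and "$" marks the end
counts = defaultdict(lambda: defaultdict(int))
for name in names:
    padded = "^" + name.lower() + "$"
    for prev, nxt in zip(padded, padded[1:]):
        counts[prev][nxt] += 1

def generate(max_len=20):
    out, prev = "", "^"
    for _ in range(max_len):
        choices, weights = zip(*counts[prev].items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "$":          # the model decided the name is finished
            break
        out += nxt
        prev = nxt
    return out

print([generate() for _ in range(5)])
```

With only a handful of training names, the output is cheerful gibberish, much in the spirit of the network’s early checkpoints above.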

By the third or fourth checkpoint, Shane’s network got better at modeling paint names:

It also started to spit out amazing coinages—“Rose Hork,” “Burf Pink”—and it had even figured out roughly what colors align with what names. “Navel Tan” is really tan. “Horble Gray” is a type of gray. “Hurky White” is white … and it’s even kind of hurky.

It did not have the same success at all times, though. Note that Ice Gray is a putrid yellow.

Ultimately, by the last checkpoint, Shane noted that:

  1. The neural network really likes brown, beige, and grey.
  2. The neural network has really really bad ideas for paint names.

It also has “Stanky Bean.” And “Bank Butt.”

“The neural net has no concept of color space, and no way to see human-color perception,” she says. Instead, it processed colors by their RGB values: the combination of red, green, and blue that come together in each hue. “It’s really seeing [colors] not as a number at a time, but as a digit at a time. I think that’s why the neural net had a lot of trouble getting the colors right, why it’s naming pinks when there aren’t any pinks, or gray when it’s not gray.”

For her, this experiment—and its viral popularity—has hinted at the strange, savant quality of neural nets. How do 7,700 paint colors, fed into a program and given little other guidance, result in “Burble Simp?” Shane isn’t sure either. “I play around with [neural nets] for pure entertainment purposes. I’m endlessly delighted by what it comes up with, both good and bad,” she says.

She’s also previously used neural nets to generate new death-metal band names (Inbumblious, Vomberdean, and Chaosrug are highlights) and the names of new Pokémon (Tortabool, Minma, and Strangy). Once, a death-metal forum got ahold of the band names and started arguing what genre they should be.

Also, her favorite auto-generated paint colors are Hurky White and Caring Tan. And it’s true those are lovely. But personally, I prefer Turdly.

The Dangers of Reading in Bed
May 19th, 2017, 12:35 PM

Lord Walsingham’s servants found him in bed one morning in 1831, burnt to a crisp. According to a notice in The Spectator, “his remains [were] almost wholly destroyed, the hands and feet literally burnt to ashes, and the head and skeleton of the body alone remained presenting anything like an appearance of humanity.” His wife also suffered a tragic end: Jumping out of the window to escape the fire, she tumbled to her death.

The Family Monitor assigned Lord Walsingham a trendy death. He must have fallen asleep reading in bed, its editors concluded, a notorious practice that was practically synonymous with death-by-fire because it required candles. The incident became a cautionary tale. Readers were urged not to tempt God by sporting with “the most awful danger and calamity”—the flagrant vice of bringing a book to bed. Instead, they were instructed to close the day “in prayer, to be preserved from bodily danger and evil.” The editorial takes reading in bed for a moral failing, a common view of the period.

* * *

The link between morality and mortality was reasonable, in part. Neglected candles could set bed-curtains ablaze and in turn risk the loss of life or property. And so, to lie wantonly in bed with a book was considered depraved.

Writings from the 18th and 19th centuries frequently dramatize the potentially horrifying consequences of reading in bed. Hannah Robertson’s 1791 memoir, Tale of Truth as well as of Sorrow, offers one example. It is a dramatic story of downward mobility, hinging on the unfortunate bedtime activities of a Norwegian visitor, who falls asleep with a book: “The curtains took fire, and the flames communicating with other parts of the furniture and buildings, a great share of our possessions were consumed.”

Even the famous and the dead could be censured for engaging in the practice. In 1778, a posthumous biography chastised the late Samuel Johnson for his bad bedside reading habits, characterizing the British writer as an insolent child. A biography of Jonathan Swift alleged that the satirist and cleric nearly burned down the Castle of Dublin—and tried to conceal the incident with a bribe.

In practice, reading in bed was probably less dangerous than public reproach suggested. Of the 29,069 fires recorded in London from 1833 to 1866, only 34 were attributed to reading in bed. Cats were responsible for an equal number of fire incidents.

Why, then, did people feel threatened by the behavior? Reading in bed was controversial partly because it was unprecedented: In the past, reading had been a communal and oral practice. Silent reading was so rare that in the Confessions, Augustine remarks with astonishment when he sees St. Ambrose glean meaning from a text simply by moving his eyes across the page, even while “his voice was silent and his tongue was still.”

Until the 17th and 18th centuries, bringing a book to bed was a rare privilege reserved for those who knew how to read, had access to books, and had the means to be alone. The invention of the printing press transformed silent reading into a common practice—and a practice bound up with emerging conceptions of privacy. Solitary reading was so common by the 17th century that books were often stored in the bedroom instead of the parlor or the study.

Meanwhile, the bedroom was changing too. Sleeping became less sociable and more solitary. In the 16th and 17th centuries, even royals lacked the nighttime privacy contemporary sleepers take for granted. In the House of Tudor, a servant might sleep on a cot by the bed or slip under the covers with her queenly boss for warmth. By day, the bed was the center of courtly life. The monarchs designated a separate bedchamber for conducting royal business. In the morning, they would commute from their sleeping-rooms to another part of the castle, where they would climb into fancier, more lavish beds to receive visitors.

In early-modern Europe, royals set the tone for bed behavior across broader society. Modest, peasant households commonly lived out of one room. By necessity, the family would share a single bed, or place several simple beds side by side. In larger bourgeois homes with multiple rooms, the bedroom also served as the central family gathering place. The four-poster canopy bed was invented during this period, and with it, the modern notion of privacy. In a busy, one-room household, drawing the bed-curtains closed was a rare opportunity to be alone. And being alone created dangerous opportunities for transgression.

* * *

In his history of masturbation, Solitary Sex, the historian Thomas Laqueur draws a direct link between 18th-century distress over solitary, silent novel reading and masturbation’s new status as a public menace: “Novels, like masturbation, created for women alternative ‘companions of their pillow.’” These “solitary vices,” as Laqueur calls them, were condemned for fear that individual autonomy would lead to a breakdown in the collective moral order.

As sleep transformed from a more public to a more private social practice, the bed became a flashpoint for that anxiety. Ultimately, the real danger posed by reading in bed wasn’t the risk of damage to life or property, but rather the perceived loss of traditional moorings.

Changes to reading and sleeping emphasized self-sufficiency—a foundation of Enlightenment thinking. The new attitude untethered the 18th-century individual from society. A social environment with oral reading and communal sleeping embeds an individual in a community. Falling asleep, a young woman senses her father snoring, or feels her younger sister curled up at her feet. When she hears stories read from the Bible, some figure of authority is present to interpret the meaning of the text.

People feared that solitary reading and sleeping fostered a private, fantasy life that would threaten the collective—especially among women. The solitary sleeper falls asleep at night absorbed in fantasies of another world, a place she only knows from books. During the day, the lure of imaginative fiction might draw a woman under the covers to read, compromising her social obligations.

The celebrated soprano Caterina Gabrielli was presumably reading one such novel when she neglected to attend a dinner party among Sicilian elites at the home of the viceroy of Palermo, who had been intent on wooing her. A messenger sent to call on the absent singer found her in the bedroom, apparently so lost in her book, she’d forgotten all about the engagement. She apologized for her bad manners, but didn’t budge from bed.

* * *

Moral panics accompany periods of social transformation. The internet, which has upended the way people read and communicate with others, is the contemporary world’s version of the novel—for good and for ill. Worries about its role parallel those about reading in bed during the 18th century. But now bedtime reading is the thing imperiled rather than the supposed source of peril.

“One must acknowledge the triumph [of] the screen,” the novelist Philip Roth told Le Monde in 2013. “I don’t remember ever in my lifetime the situation being as sad for books—with all the steady focus and uninterrupted concentration they require—as it is today. And it will be worse tomorrow and even worse the day after.”

Roth is probably right: Steady focus and uninterrupted concentration require solitude. But ironically, Roth’s 21st-century worry is exactly the opposite of his 18th-century counterparts’. Today, when people repose by themselves in bed at night, a buzz of friends and strangers emanates from their screens. Social connection is hardly an issue when reading in bed. Now the problem is that one can never do so alone.


This article appears courtesy of Object Lessons.

20 Questions With Google's Assistant and Apple's Siri
May 18th, 2017, 12:35 PM

MOUNTAIN VIEW, Calif.—If you own an iPhone, there’s yet another way to talk with an artificial intelligence trained on the whole internet and beamed down to your handset from a cluster of computers somewhere in the world.

Tuesday, Google made its artificial-intelligence-powered Assistant available for the iPhone. The service, which uses a conversational interface to do things and provide information for users, has been available on Android phones since spring of last year. The move brings the company’s voice interface into direct competition with Apple’s own Siri. For the first time, you can have both assistants on the same phone, in your palm.

Google’s CEO, Sundar Pichai, announced yesterday’s release at the company’s big developer conference, I/O. This annual gathering is filled with previews of products and sessions for coders, but for the hoi polloi, such events are most useful as statements of what these companies think they are. They serve as a platform for promoting the way Google’s executives see their company and the world.

Pichai’s keynote speech was all about “democratizing artificial intelligence.” He’s been building an argument for the last year that the tech world is shifting from “mobile-first” to “AI-first.” And that this change is forcing Google to “reimagine our products for a world that allows a more natural, seamless way of interacting with technology,” as he wrote in a related blog post.  

No consumer-facing technology better exemplifies Pichai’s vision than Google Assistant. Like Apple’s Siri and Amazon’s Alexa, Google Assistant lets people ask it questions or command actions and it attempts to comply.

The Atlantic decided to ask Siri and Google Assistant 20 questions on the iPhone, both as a practical exercise in testing their respective capabilities on Apple’s home court, and in hopes that the nature of their responses would tell us more about the respective visions Google and Apple have for their corporate AI avatars.

You can read on for screenshots. My overall impression is that the two assistants are pretty evenly matched. Siri, which came out overhyped, has become a good product. Google Assistant, despite operating a level up in the software stack, without operating-system-level integration, also performs very well.

They do seem to have their strengths and weaknesses. The problem is that it’s not always clear what causes those differences, which makes it hard to pick one over the other. Why does Apple nail what channel the Cavs game is on but Google does not? Why is Google so good at delivering flight information? Who knows. And that’s one of the strange things about using these products. You’re blindly groping around inside the artificial intelligences that these companies have built, building your own model of their performance. And that’s really the only way to come to understand them. They’re like pet fish. Attached to huge computing clusters. Trained on more data than anyone can possibly imagine.

As corporate avatars, the two assistants share a lot in branding and execution. In their default American mode, both are unfailingly polite and voiced by slightly robotic-sounding women. They draw on much of the same data and many of the same capabilities. On a linguistic level, they both present themselves as intermediaries there to search the internet on your behalf. They don’t try to know things for you, but rather to find things for you. “Here’s what I found,” “This came back from search,” “Here’s a result from the web,” “I found a few places.” Siri has a bit more pizzazz, like when I asked about the Cavaliers-Celtics betting line and it began its answer, “According to my sources.”

If they seem similar now, I do expect them to differentiate over the next year. What seems clear from Google’s presentations at I/O today is that Assistant is central to the company’s strategy. Can the same be said of Siri’s importance at Apple?

Here are the 20 questions, selected for a range of difficulty and functionality. These are all things that someone on our team has searched or tried to do.

  1. How do I get to Shoreline Amphitheater?

In the first head-to-head matchup, Assistant and Siri both performed well, accurately transcribing my request and delivering up a map with directions from my location to Shoreline Amphitheater, the location of I/O.

In this image and the other examples below: Google Assistant (left), Apple’s Siri (right).  
  2. Are there any places around here for tacos?

Again, the services were a push. They both served up a list of Mexican food places. Nothing fancy, but adequate responses. Points to Google, I suppose, for not suggesting Taco Bell.

  3. What’s the line for the Cavs-Celtics game?

Siri was the clear victor here. Google Assistant defaulted to showing search results, while Siri gave an answer: “according to my sources,” the Cavaliers are favored by 4.

  4. What channel is the Cavs game on?

Siri again delivered the right answer: TNT, along with MvD, which I think is a Spanish-language deportes channel.

  5. Do I have any new emails?

In retrospect, this was a bad question, but both assistants showed me some emails, which I’d have to blur out here anyway, so I’m skipping the screenshot. Neither made a “but her emails” joke, to my relief.

  6. What’s the safe cooking temperature for pork?

Assistant won this round, delivering a precise response instead of the default to web results.

  7. Play The Coup’s most recent album.

Neither assistant recognized my pronunciation of The Coup, Oakland’s finest communist hip-hop group. There were some nice interpretations, though: the cruise, the couch, the coolest. Silent consonants are tough!

  8. How many times can a baby bottle be reheated?

Neither assistant was a clear winner here. Both delivered search results, although Google Assistant highlighted one (from BabyCenter about formula).

  9. Has Beyoncé ever won a Grammy for best album?

This was Siri’s only total flub. It showed me a Beyoncé album I had on my phone rather than answering the question. Google Assistant listed out the times she had won and then after reading the first page of them to me, added “and other awards,” in an impressively conversational way.

  10. Send Sarah Rich [my wife] a message saying X.

This is still not the easiest way to send texts, but it works in a pinch. The transcriptions in both cases were accurate and both assistants asked for my confirmation before sending.

  11. Is Virgin America flight 1 on time?

Google Assistant won again here. It gave a perfect answer, while Siri served up web results.

  12. How often should I water my succulents?

Siri gave web results. Google Assistant’s answer seemed subtly impressive. While it just delivered and read a paragraph from a website, it pulled its succulent advice from the Cactus and Succulent Society of San Jose while I was in San Jose. That was a nice surprise, I thought! Here it was giving me local watering instructions. But then I tried the same search when I got back to Oakland (a good 40 miles away) — and it gave the same response from the San Jose Cactus Society. So, maybe sometimes we give the machine too much credit.

  13. What is 7 percent of 1,456?

Google Assistant surprisingly struggled with my numerical readout here, misinterpreting 1,456 multiple times. Siri nailed it on the first try, impressively changing the transcription of “fourteen hundred fifty sixty” like this: 1400-->1450-->1456.

  14. Do I have any pictures of Sarah Rich [my wife] on my phone?

This worked very well with both assistants, but Google Assistant has the edge thanks to the superior facial-recognition algorithm in Google Photos. I have all my pictures stored in both places. Apple Photos recognized my wife in fewer than 700 pictures. Google Photos found 3,700 pictures. One interesting presentation difference: Siri showed me the oldest photos, Assistant showed me the most recent ones.

  15. When did Lemonade come out?

Flawless execution by both Assistants.

  16. Why Aaliyah have to take that flight?

This is a line from Jadakiss’s “Why.” I didn’t expect either assistant to come up with anything interesting. And indeed, Siri punted (and also refused to hear me say Aaliyah). Fascinatingly, Google Assistant pulled up the video of the song. That’s fun and smart, and a nice way to leverage YouTube and Google’s lyrics database.

  17. What was Google’s/Apple’s revenue in 2016?

Both assistants gave disappointing answers here. You’d think this would be one of the easiest pieces of information to extract (and in fact other companies do so), yet we got the default search result answer in both cases.

  18. Who wrote Parable of the Sower?

Google Assistant nailed it: Octavia Butler. Siri kept hearing the title as “Parable of the Sewer,” but its web results contained the correct answer as well as the Biblical citation from which Butler’s novel draws its name.

  19. When is the next full moon?

Siri tromps Google Assistant here, basically due to feeding the request into Wolfram Alpha, which has assembled a lot of this kind of information. Meanwhile, Assistant surprisingly pulls a bad snippet from the Old Farmer’s Almanac.

  20. What kind of sharks are the sharks in Finding Nemo?

This question came courtesy of my son, who, at 3-and-a-half, is a real underwater-creature taxonomy nerd. He heard me asking questions to my phone and was like, “Ask it what kind of sharks are the shark in Finding Nemo?” I thought the chance that an actual answer would come back was close to zero. And that was true for Siri. But check out what Assistant returned. This is the single most interesting answer in this whole series. Just more evidence that kids are better at exploring possibilities than adults are.

How Soon Until the Next Ransomware Catastrophe?
May 18th, 2017, 12:35 PM

A little over a week ago, a Cumbrian woman named Joyce broke her foot. What happened next to Joyce’s foot involves the National Security Agency, decades of deferred maintenance on broken software, a hacking group that communicates exclusively in broken English, and an unsophisticated piece of ransomware, all interacting with the global network that almost everyone depends on now.

The success of the WannaCry ransomware that tore through Eurasia over the weekend required a chain of failures. Stopping any one of these failures could have stopped the crisis, and could still stop some of the crises that might otherwise occur. This makes a difference in the lives of normal people who have nothing to do with any of these global players in the computer-security game, and it frustrates them.

“Embarrassing that my home PC [is] vastly better tech than the vastly more important health service,” Leslie, a retired electrician in South Cumbria, tweeted on Sunday.

Leslie’s wife, Joyce, is a home-care worker. As she left a client’s house earlier this month, she tripped over the threshold of the door. Joyce knew something was wrong with her foot, and drove herself to the emergency room at Furness General, in Barrow-in-Furness, Cumbria, England. She was X-Rayed, fitted with a plaster cast, and instructed to return for a follow-up appointment. By then, the swelling had improved.

Before she left, she was given another appointment for Friday, May 12. It turned out to be the day the NHS fell victim to the largest ransomware attack in history.

When Joyce and Leslie arrived for the afternoon appointment, “the receptionist was rushing backwards and forwards, I gathered something was wrong with PCs,” Leslie told me. “Reception filled up, all ages, arms, legs in plaster.” (I’m using only first names for Leslie and Joyce, out of concern that talking about their experience could make them targets for online abuse.)

The IT team told staffers to turn off the PCs, but the situation was confused. Soon more senior staff appeared. That’s when Leslie heard someone saying “cyberattack.”

“A smartly dressed woman arrived, and they went round to everyone explaining that the system was down, they couldn’t access X-Rays or patient records. If we had time, we could wait to see if they could clear it, or reschedule the appointment,” Leslie said. “We all thought it was just a local issue, then it became an issue for the local Trust of several hospitals.” By the time the couple got home, the issue was national, and then soon after, international.

The story of WannaCry (also called Wcry and WannaCrypt) begins somewhere before 2013, in the hallways of the National Security Agency, but we can only be sure of a few details from that era. The NSA found or purchased the knowledge of a flaw in Microsoft’s SMB V.1 code, an old bit of network software that lets people share files and resources, like printers. While SMB V.1 has long been superseded by better and safer software, it is still widely used by organizations that can’t, or simply don’t, install the newer software.

The flaw, or bug, is what people call a vulnerability, but on its own it’s not particularly interesting. Based on this vulnerability, though, the NSA wrote another program—called an exploit—which let them take advantage of the flaw anywhere it existed. The program the NSA wrote was called ETERNALBLUE, and what they used it to do was remarkable.

The NSA gave themselves secret and powerful access to a European banking transaction system called SWIFT, and, in particular, SWIFT’s Middle Eastern transactions, as a subsequent data-dump by a mysterious hacker group demonstrated. Most people know SWIFT as a payment system, part of how they use credit cards and move money. But its anatomy, the guts of the thing, is a series of old Windows computers quietly humming along in offices around the world, constantly talking to each other across the internet in the languages computers only speak to computers.

The NSA used ETERNALBLUE to take over these machines. Security analysts, such as Matthieu Suiche, the founder of Comae Technologies, believe the NSA could see, and as far as we know, even change, the financial data that flowed through much of the Middle East—for years. Many people have speculated on why the NSA did this, speculation that has never been confirmed or denied. A spokesperson for the agency did not immediately reply to The Atlantic’s request for an interview.

But the knowledge of a flaw is simply knowledge. The NSA could not know if anyone else had found this vulnerability, or bought it. They couldn’t know if anyone else was using it, unless that someone else was caught using it. This is the nature of all computer flaws.

In 2013 a group the world would later know as The Shadow Brokers somehow obtained not only ETERNALBLUE, but a large collection of NSA programs and documents. The NSA and the United States government haven’t indicated whether they know how this happened, or whether they know who The Shadow Brokers are. The Shadow Brokers communicate publicly using a form of broken English so unlikely that many people assume they are native English speakers attempting to masquerade as non-native ones—but that remains speculative. Wherever they are from, the trove they stole and eventually posted for all the world to see on the net contained powerful tools, and the knowledge of many flaws in software used around the world. WannaCry is the first known global crisis to come from these NSA tools. Almost without a doubt, it will not be the last.

A few months ago, someone told Microsoft about the vulnerabilities in the NSA tools before The Shadow Brokers released their documents. There is much speculation about who did this, but, as with so many parts of this story, it is still only that—speculation. Microsoft may or may not even know for sure who told them. Regardless, Microsoft got the chance to release a program that fixed the flaw in SMB V.1 before the flaw became public knowledge. But they couldn't make anyone use their fix, because using any fix—better known as patching or updating—is always at the discretion of the user. They also didn't release it for very old versions of Windows. Those old versions are so flawed that Microsoft has every reason to hope people stop using them—and not just because it allows the company to profit from new software purchases.

There is another wrinkle in this already convoluted landscape: Microsoft knew SMB V.1, which was decades old, wasn’t very good software. They’d been trying to abandon it for 10 years, and had replaced it with a stronger and more efficient version. But they couldn’t throw out SMB V.1 completely because so many people were using it. After WannaCry had started its run around the world, the head of SMB for Microsoft tweeted this as part of a long and frustrated thread:

The more new and outdated systems connect, the more chance there is to break everything with a single small change.

We live in an interconnected world, and in a strange twist of irony, that interconnectedness can make it difficult to change anything at all. This is why so many systems remain insecure for years: global banking systems, and Spanish telecoms, and German trains, and the National Health Service of the United Kingdom.

Furness General Hospital was thrown into chaos by the first (and doubtless not last) ransomware computer worm, WannaCry. Ransomware works by taking all the data on your computer hostage until you pay for the key to get it back. A worm works by scanning the network and replicating itself in other computers without human assistance. Nasty stuff, and a nasty combination.

Scanning is how computers find out what's available at different addresses on the internet. A computer can send data to a port, and wait for a reply. It would be as if someone came to your front door and tried saying, “Hello!” in every language, until you said, “¡Hola!” back, and then they noted down that you spoke Spanish.
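To make that analogy concrete, here is a minimal sketch, not drawn from the article and aimed at a documentation-only placeholder address, of what one of those “Hello” checks looks like in Python: the scanner simply tries to open a connection on a port and notes whether anything answers.

```python
import socket

def answers_on_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Knock on one 'door': try a TCP connection and report whether anything replies."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # something spoke back on this port
    except OSError:
        return False      # silence, refusal, or timeout

# 192.0.2.10 is a documentation-only address; SMB traditionally listens on TCP port 445.
print(answers_on_port("192.0.2.10", 445))
```

Tools like nmap do essentially this, at scale and across many ports at once.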

The attackers behind WannaCry sent these Hellos, and waited for something speaking SMB V.1 to reply. When that happened, they sent the NSA’s exploit, which allowed them to send a new program to the computer that spoke SMB V.1. The computer, once exploited, then ran the program sent by the attackers. That program was WannaCry.

When WannaCry runs, it does a few things in a specific order. First it looks in the memory of the computer it’s on to see if another copy of WannaCry is already running on the machine, and stops if it finds one. Then—and this is a strange step for it to take—WannaCry looks for an address on the net, and if it finds that address, it again shuts down and does nothing. If no other copy shows up in memory and it can’t see that mysterious domain, it starts to scan for other computers that talk SMB V.1, both on the net and on local computers that were never supposed to touch the net. If it finds more computers talking SMB V.1, the cycle repeats.

At the same time, WannaCry starts to encrypt all the files on the computer it’s running on. It doesn’t move the files or even read them; it just puts them into an indecipherable state. After that, WannaCry’s last step is to show a message on the screen: the infamous request for $300 in Bitcoin upfront, $600 if you wait, and no decryption key forever if you wait a week. All you have after a week is indecipherable text where your files used to be.

There are clues to suggest that WannaCry wasn’t written by sophisticated attackers. The domain that it checks when it’s first installed acts as a kill switch on further infection. (It was caught early by a researcher who goes by @MalwareTech while examining WannaCry. He registered the domain, and pointed it to a server he controlled to see what it did, and what it did was stop the first wave of infection.)
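To see why simply registering a domain could stop the spread, here is a schematic sketch, not taken from the worm’s actual code and using a deliberately unresolvable placeholder domain, of how a reachability check can serve as a kill switch: if the lookup succeeds, the program stops before doing anything else.

```python
import socket

KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"  # placeholder, not the real domain

def kill_switch_is_live(domain: str = KILL_SWITCH_DOMAIN) -> bool:
    """Return True if the domain resolves, i.e. someone has registered it and pointed it somewhere."""
    try:
        socket.gethostbyname(domain)
        return True    # the lookup answers, so a researcher's sinkhole is in place
    except socket.gaierror:
        return False   # the domain is unregistered; the lookup fails

if kill_switch_is_live():
    raise SystemExit("Kill-switch domain is live; stand down and do nothing.")
# ...otherwise a program built this way would carry on with the rest of its work.
```

Once the real domain pointed at a server the researcher controlled, every new copy performing a check like this found the domain live and shut itself down.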

Later, someone or someones, possibly even the original attacker, edited WannaCry to check another domain. This time Suiche found the new domain while following and analyzing WannaCry. Suiche registered it and pointed it at his server—stopping the infection again. This game of whack-a-mole with domains has gone through a few more iterations since then, but no one is sure why the domain kill switch is even there. There’s speculation, of course. If there’s one thing surrounding WannaCry, it’s an abundance of speculation.

Besides the kill switch, the payment system was amateurish as well. Most ransomware payment systems are automated, but despite designing something that would burn its way through the internet in record time, the purveyors of WannaCry set it up so that they’d have to deal with ransom payments and decryption individually. This does not scale to attacking the whole world.

Like most people in security, Suiche puts most of the blame on not patching and upgrading software. “Companies need to be better prepared with backup strategies and up to date systems!” he said. “We got lucky today because this variant was caught early enough that no further damages had been done, but we need to be prepared for tomorrow!”

But the sprawling companies that are vulnerable to attacks like this one have a fragile network not just of computers, but of contracts, vendors, service agreements, and customers who are deeply affected by any downtime. No middle manager is eager to be scolded for system downtime taken now to avert some abstract hacker threat in the future, and no one gets called into their boss’s office and patted on the back when WannaCry doesn’t hit their servers. Suiche says he understands that patching and upgrading is not easy on these complex systems, but “neither is trying to recover data on a quarter million systems. No solution is easy, people need to pick their battles.”

He’s not wrong. But when it comes to the real world of global operations, many security professionals—even the very good ones—suffer from being right, but not always useful.

The global network may be easy prey, but the creators of WannaCry weren’t the predators. They were just the scavengers that came years after the NSA had developed the exploit in the first place.

Back in 2013, when the NSA was using its suite of tools to quietly attack Windows-based servers, the Snowden documents showed that Microsoft had been cooperating with the spy agency to decrypt messages as part of their unrelated Prism program. If the NSA had told the company about the flaw back then, the patch could have been in place years ago, and The Shadow Brokers’ theft and subsequent release of the NSA’s exploit wouldn't have sent British patients home, waiting for a phone call telling them when they could see doctors again. Spanish telecommunications and German trains would have run normally on Friday. Systems all over the world would not have been broadcasting the now-iconic, “Oops your files have been encrypted” on billboards and kiosks and sale terminals.

The NSA played both sides against the middle, and got caught.

On the Sunday after WannaCry, Microsoft’s chief legal officer, Brad Smith, posted a statement. “We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits,” he wrote in the blog post. “This is one reason we called in February for a new ‘Digital Geneva Convention’ to govern these issues, including a new requirement for governments to report vulnerabilities to vendors, rather than stockpile, sell, or exploit them.”

WannaCry wasn’t a disaster, but it could have been. Right now, it’s a cautionary tale of what can happen when spy agencies collide with technical debt and computer illiteracy. It’s an ecological disaster waiting to happen, no less complex for being an artificial ecology. In its way, this is no less vital than dealing with disease or pollution or environmental destruction, but it’s harder to see, because we did it to ourselves.

Gratitude for Invisible Systems
May 18th, 2017, 12:35 PM

Before asking the question of how technology can affect democracy, I’m going to ask: What is democracy for?

In a developed, post-industrial country at the start of the twenty-first century, one of the main functions of a democratic political system is to help us collectively manage living in a complex, global society. Our daily lives take place in a network of technological, socio-technical, and social systems that we barely notice, except when things go wrong.

To start with, there are the infrastructural systems that fill out the bottom of Maslow’s pyramid of needs: clean water on tap, the ability to flush away disease-causing waste, natural gas for warmth and food preparation, and raw energy in the form of electricity, for heat and light, to replace physical labor, and to power cooling and electronics. Moving up Maslow’s pyramid, these systems underpin communication, community and self-actualization: connections to the rest of the world in the form of telecommunications and postal mail, physical links in the form of roads and a subway that link to rail, airports, and more.

While they’re far from perfect, these systems work well enough that mostly we don’t think about them. When they do fail, especially as a result of lack of care or maintenance (like interstate highway bridge collapses or the ongoing water crisis in Flint, Michigan), we recognize it as the profound and shocking betrayal that it is.

Besides these physical networks, there are a host of other systems that exist primarily to contribute to the common good by taking on responsibility for safety, access, and planning. I don’t have to know where my breakfast eggs came from to know they’re safe to eat, because of the United States Department of Agriculture. When I fill a prescription, the pills I’m given will be efficacious, thanks to the Food and Drug Administration. The Centers for Disease Control and Prevention tracks and responds to outbreaks before they become epidemics. I’ve been known to get on a plane and fall asleep before takeoff; that sense of security exists because the Federal Aviation Administration regulates air traffic. And these are just a handful of ways these systems affect my daily life.

When we think about caring for our neighbors, we think about local churches and charities—systems embedded in our communities. But I see these technological systems as one of the main ways that we take care of each other at scale. It’s how Americans care for all three hundred million of our neighbors, rich or poor, spread over four million square miles, embedded in global supply chains.

What’s more, we can collectively fund systems that even the richest, most self-sufficient people couldn’t create for themselves, and we use them to serve the common good. When I look at my phone to decide if I need an umbrella, the little blue dot that says where I am is thanks to the network of Global Positioning System satellites operated by the United States Air Force, and the weather is the result of a $5.1 billion federal investment in forecasting, for an estimated $31.5 billion of benefit in saving lives, properties, and crops (and letting me know I should wear a raincoat).

If I were to make a suggestion for how technology could be used to improve our democracy, I would want to make these systems more visible, understandable, and valued by the general public. Perhaps a place to start is with the system that is the ultimate commons—our shared planet. One way that we can interact with it is through citizen-science projects: collecting data about our local environment to help build a larger understanding of anthropogenic climate change. The late scientist and activist Ursula Franklin wrote, “Central to any new order that can shape and direct technology and human destiny will be a renewed emphasis on the concept of justice.” If we want to use technology to make democracy better, we can start with the systems that we use to make it more just.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Democracy Has a Design Problem
May 17th, 2017, 12:35 PM

Technology alone can’t save democracy. When technology is designed and used well, it can make it easier for people to participate in elections and other activities of civic life. But when it’s not, technology that promises to help ends up being harmful.

Some tools or programs meant to improve access to information are only available to people who are comfortable with technology, have smartphones, and can afford good data plans.

What happens to the people who get left behind?  

In the weeks before the November election, the Center for Civic Design followed voters around the country in a research study. We wanted to know how they learned about the candidates and issues they would vote on, especially for the local contests that get less attention. Whether they were deeply engaged in following the election or not, they all felt immersed in a “buzz” of opinions and news that left them feeling more anxious than informed. Some worried about being able to trust anything they read. Others simply felt overwhelmed and avoided social media.

If we want technology to help connect people with their government, we have to design it with a human face. It’s really very simple: if you don’t include a wide range of people in the design process, the richness and variety of their experiences are not considered in the final product.

When we have a more complete picture of the people, it’s easier to see the social impact of design decisions, and harder to build inadvertent stupidity into the assumptions and algorithms that go into creating technology.

At Civic Design, whether we are redesigning a voter registration form or researching barriers to participation, all of our projects start with listening. We hear both small and big things. For example, when we talk to new voters, they are often eloquent about democracy and the importance of giving everyone a voice. They want to know what is on their ballot and how the decisions they make will affect them. But they are less sure of the mechanics of participation.

For example, when we tell first-time voters and new citizens that marking a ballot is like taking a standardized test, we shouldn’t be surprised when they ask how long they have to be at the polling place. Because their experience is that tests—and most government appointments—take much longer than the few minutes it should take to vote. As one person put it, “What exactly do you do when you go to the polling place?”

Understanding the perspectives of many different people is important because technology is not neutral. It includes all of the assumptions and blind spots of the people who create it. It’s important to question those assumptions, and to ask the hard questions about how civic technology can be useful and inclusive, helping everyone participate.

We need tools that demystify the act of voting in simple, clear language that can help bridge the civic literacy gap. This starts by designing the entire election experience and all of the materials, from voter registration to ballots, so they are easier to read and use. One of the saddest things we heard in a research study was a young voter who said, “I don’t know too much about voting. That’s why I stopped doing it.”

Open civic datasets are already making official information about elections widely available. Voters in our study last fall found information about who is on the ballot, how to contact candidates or elected officials, and the location of the nearest polling place on sites like Facebook and Google Search. This extends the reach of the elections office, making personalized, accurate, timely information part of the everyday experience. How do we make sure these tools are easy to use and available for every community?

We also need to ask who might be left out if participating in civic life requires digital literacy and access to the network. Will the information be available on older devices or through low-tech text messaging (or even in print)? Will the tools we create be accessible for people with disabilities, people with low literacy, and people with low digital literacy? What data will be collected about the people who access these tools, and how will it be used?

Most importantly, the new tools of democracy must meet people where they are, inform instead of overwhelm, and invite people to participate in their own way.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Freeing Technology From the Pace of Bureaucracy
May 16th, 2017, 12:35 PM

Technology can be powerful, but it isn’t inherently good or bad, just as a hammer isn’t inherently good or bad; what matters is how it’s used. Are we using the tool to build or to destroy?

Technology can be a weapon against democracy. Fake news, fabricated for virality, spreads harmful propaganda at the speed of a share. Governments use technology to violate the privacy of law-abiding citizens. Bad actors have influenced elections and broken into our Defense Department through our inboxes.

But if civic engagement fuels democracy, technology can be a savior, too. Technology helped us to register more voters in 2016 than ever before in American history. Technology has empowered outsider candidates to raise funds, compete, and win against elite party heavyweights. Open data policies and portals provide free, up-to-date access to valuable information about communities and government, and citizens are using it to build businesses and to hold government accountable. An unprecedented number of citizens are taking a stand through digital petitions and using smartphone apps to contact their elected officials. We may not like all the outcomes, but more people are getting involved in democracy through tech.

Technology also improves how well our democracy works for its citizens. Government services can and should be delivered as efficiently and effectively as the technology you use to get a ride or order dinner. In redesigning and reengineering digital services and improving infrastructure, governments are making strides toward this goal. Applying for citizenship, getting VA benefits and food stamps, and small-business permitting are just a few examples of transactions with federal, state, and local governments that are improved with the help of talented technologists who choose to work in public service. Improving digital services builds trust in government’s value and purpose through delivery on its promises.

Despite progress, our democracy struggles to keep pace with technological change. We face ever-evolving security concerns; inefficient and outdated software, processes, and equipment; and a lack of qualified professionals in government to fix it all. The technology industry must become more democratic too. A lack of representation means we solve certain problems with tech and ignore those we cannot see or understand. Poor access to technology resources and education compounds existing inequalities. Technologically illiterate citizens are more vulnerable to hacking and misleading digital propaganda, as well as exclusion from democratic processes that are becoming more tech-oriented.

We need citizens to push for solutions. We need elected officials at every level of government to invest in equitable access to technology and tech education; improved security; open data; and effective, efficient digital services.

Technological threats to democracy will always exist. Indeed, as the tools to destroy democracy become more powerful, these threats will proliferate and grow. But that same powerful technology used to empower an engaged citizenry remains our best tool to build democracy, too.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Cyberwar Is Officially Crossing Over Into the Real World
May 16th, 2017, 12:35 PM

The devastating effects of a massive cyberattack are no more confined to a computer network than those of any other action carried out online. People use computers and the internet all the time to make things happen in the physical world.

A cyberattack isn’t just a cyberattack. It’s an attack.

Hospitals, pharmacies, and major corporations like FedEx and the Spanish telecommunications giant Telefonica were among the 200,000 victims hobbled by a global ransomware attack on Friday, which locked people’s computers and demanded Bitcoin payment in exchange for access. In the United Kingdom, some hospitals canceled procedures and other appointments as a result. The software security firm Symantec found that people paid ransoms totaling about $54,000 in the attack, though officials strongly caution against paying such ransoms.

Among the many questions prompted by the fallout of the attack is an increasingly urgent one: At what point will a cyberattack prompt a more traditional form of retaliation? More importantly: When should it?

Scholars have been asking this question for years, but the ubiquity of networked computing and a growing threat of sophisticated cyberattack has made it all the more pressing in recent months. As the rules of war are adapted to the internet age, determining who is responsible for massive, disruptive cyberattacks has made foreign policy even more complex. And the attribution of cyberattacks is a messy, time-consuming business.

The public has lost confidence in officials’ ability to pinpoint the origins of a cyberattack, as Kaveh Waddell wrote for The Atlantic earlier this year. “Mistrust of attribution would make hacking easier, since it means retribution is harder,” Nicholas Weaver, a professor and security researcher at the University of California, Berkeley, told Waddell at the time. “You need to have attribution for retribution, both to know that you are retaliating against the right actor and to convince the public you are justified in doing so if it is a public retaliation.”

But attribution of cyberattacks is often “extremely difficult and, in many cases, impossible to achieve,” Dimitar Kostadinov wrote for the Infosec Institute in 2013. “However, the law of war requires that the initial cyber attack must be attributed before a counterattack is permitted.”

It’s still unclear who was behind last week’s attack, which hobbled systems in more than 150 countries. But private security firms and intelligence officials now believe there’s evidence to suggest that hackers with ties to North Korea were involved.

Here’s another complicating factor: Civilians and state actors can now access and use the same grade of online weaponry—in a cyberattack, that means computer code. And in some cases, civilians might inadvertently carry out an attack. A nationwide AT&T outage in the United States in March seems to have been accidental, for example. But the ransomware attack last week was clearly designed to wreak havoc.

An animated map, produced by the security software company Symantec, shows how the ransomware attack spread across the planet. (Symantec)

The computer code used to carry out Friday’s attack had striking parallels to the code used in three earlier high-profile attacks, including the hacking of Sony Pictures in 2014. The similarities were first pointed out by Neel Mehta, a security researcher at Google. The link to North Korea is, for now, speculative. It may take months to say with certainty who is responsible for the attack. Hackers routinely borrow sections of code from one another, in some cases as a way to throw investigators off their trail.

This time, the attackers used cyberweapons that were stolen from the United States National Security Agency and leaked online last month. Microsoft said at the time that it had already patched the vulnerabilities exposed as a result of the theft—but the speed and scale of the WannaCry ransomware attack suggests that many networks failed to upgrade their systems.

“Any unpatched Windows computer is potentially susceptible to WannaCry,” Symantec wrote in a blog post on Monday. “Organizations are particularly at risk because... WannaCry has the ability to spread itself within corporate networks without user interaction.”

Most responses to cyberattacks are still passive: systems are patched, and cybersecurity experts offer lessons on how to protect against the next attack. That’s largely due to the aforementioned attribution problem, as David E. Graham wrote in his paper “Cyber Threats and the Law of War” in 2010.

“While a victim state might ultimately succeed in tracing a cyber attack to a specific server in another state, this can be an exceptionally time consuming process, and, even then, it may be impossible to definitively identify the entity or individual directing the attack,” he wrote. “For example, the ‘attacker’ might well have hijacked innocent systems and used these as ‘zombies’ in conducting attacks.”

To complicate matters further, any sort of pre-emptive self-defense strike would be difficult to justify in the case of cyberwarfare, because it’s nearly impossible to anticipate a cyberattack.

The United States has, however, used its own cyberweapons to disrupt material weapons testing. As The New York Times revealed in March, the United States has, since 2014, intensified a secret campaign to sabotage North Korea’s frequent missile tests.  

That campaign of sabotage demonstrates as well as anything the startlingly short distance between cyberwar and what some might call “actual” war. Disrupting North Korea’s tests, as William J. Perry, who was secretary of defense in Bill Clinton’s administration, said at a recent presentation in Washington, would be “a pretty effective way of stopping their ICBM program,” The New York Times reported.

Computer code does all sorts of things in the world. Only a small portion of it stays within the realm of computing. Tweets don’t just flutter around on Twitter; they have the potential to shape geopolitical relations. Taxis are summoned by smartphones and materialize on city streets. A single click on Amazon will bring all manner of goods to your doorstep within days.

It remains tempting to draw a line between online and offline, between the internet and the “real world.” But the reality is: That line is mostly an illusion. It’s foolish to assume that the wars that are fought online will remain confined to the internet.

The Weird Thing About Today's Internet
May 16th, 2017, 12:35 PM

Hello. It’s my first day back covering technology for The Atlantic. It also marks roughly 10 years that I’ve been covering science and technology, so I’ve been thinking back to my early days at Wired in the pre-crash days of 2007.

The internet was then, as it is now, something we gave a kind of agency to, a half-recognition that its movements and effects were beyond the control of any individual person or company. In 2007, the web people were triumphant. Sure, the dot-com boom had busted, but empires were being built out of the remnant swivel chairs and fiber optic cables and unemployed developers. Web 2.0 was not just a temporal description, but an ethos. The web would be open. A myriad of services would be built, communicating through APIs, to provide the overall internet experience.

The web itself, in toto, was the platform, as Tim O’Reilly, the intellectual center of the movement, put it in 2005. Individual companies building individual applications could not hope to beat the web platform, or so the thinking went. “Any Web 2.0 vendor that seeks to lock in its application gains by controlling the platform will, by definition, no longer be playing to the strengths of the platform,” O’Reilly wrote.

O’Reilly had just watched Microsoft vanquish its rivals in office productivity software (Word, Excel, etc.) as well as Netscape: “But a single monolithic approach, controlled by a single vendor, is no longer a solution, it's a problem.”

And for a while, this was true. There were a variety of internet services running on an open web, connected to each other through APIs. For example, Twitter ran as a service for which many companies created clients and extensions within the company’s ecosystem. Twitter delivered tweets you could read not just on twitter.com but on Tweetdeck or Twitterific or Echofon or Tweetbot, sites made by independent companies which could build new things into their interfaces. There were URL shortening start-ups (remember those?) like TinyURL and bit.ly, and TwitPic for pictures. And then there were the companies drinking at the firehose of Twitter’s data, which could provide the raw material for a new website (FavStar) or service (DataSift). Twitter, in the experience of it, was a cloud of start-ups.

But then in June of 2007, the iPhone came out. Thirteen months later, Apple’s App Store debuted. Suddenly, the most expedient and enjoyable way to do something was often tapping an individual icon on a screen. As smartphones took off, the amount of time that people spent on the truly open web began to dwindle.

Almost no one had a smartphone in early 2007. Now there are 2.5 billion smartphones in the world—2.5 billion! That’s more than double the number of PCs that have ever been in use in the world.

As that world-historical explosion began, a platform war came with it. The Open Web lost out quickly and decisively. By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web.

O’Reilly’s lengthy description of the principles of Web 2.0 has become more fascinating through time. It seems to be describing a slightly parallel universe. “Hyperlinking is the foundation of the web,” O’Reilly wrote. “As users add new content, and new sites, it is bound into the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.”

Nowadays, (hyper)linking is an afterthought because most of the action occurs within platforms like Facebook, Twitter, Instagram, Snapchat, and messaging apps, which have all carved space out of the open web. And the idea of “harnessing collective intelligence” simply felt much more interesting and productive then than it does now. The great cathedrals of that time, nearly impossible projects like Wikipedia that worked and worked well, have all stagnated. And the portrait of humanity that most people see filtering through the mechanics of Facebook or Twitter does not exactly inspire confidence in our social co-productions.

Outside of the open-source server hardware and software worlds, we see centralization. And with that centralization, five giant platforms have emerged as the five most valuable companies in the world: Apple, Google, Microsoft, Amazon, Facebook.

Market Capitalization for Apple (AAPL), Amazon (AMZN), Facebook (FB), Google (GOOGL), and Microsoft (MSFT), May 14, 2007 to present

In mid-May of 2007, these five companies were worth $577 billion. Now, they represent $2.9 trillion worth of market value! Not so far off from the combined market cap ($2.85 trillion) of the 10 largest companies in the second quarter of 2007: Exxon Mobil, GE, Microsoft, Royal Dutch Shell, AT&T, Citigroup, Gazprom, BP, Toyota, and Bank of America.

And it’s not because the tech companies are being assigned astronomical price-to-earnings ratios as in the dot-com bust. Apple, for example, has a PE ratio (17.89) roughly equal to Walmart’s (17.34). Microsoft’s (30.06) is in the same class as Exxon’s (34.36).

Massive size has become part and parcel of how these companies do business. “Products don’t really get that interesting to turn into businesses until they have about 1 billion people using them,” Mark Zuckerberg said of WhatsApp in 2014. Ten years ago, there were hardly any companies that could count a billion customers. Coke? Pepsi? The entire internet had 1.2 billion users. The biggest tech platform in 2007 was Microsoft Windows, and it had not crossed a billion users.

Now, there are a baker’s dozen individual products with a billion users. Microsoft has Windows and Office. Google has Search, Gmail, Maps, YouTube, Android, Chrome, and Play. Facebook has the core product, Groups, Messenger, and WhatsApp.

All this to say: These companies are now dominant. And they are dominant in a way that almost no other company has been in another industry. They are the mutant giant creatures created by software eating the world.

It is worth reflecting on the strange fact that the five most valuable companies in the world are headquartered on the Pacific coast between Cupertino and Seattle. Has there ever been a more powerful region in the global economy? Living in the Bay, having spent my teenage years in Washington state, I’ve grown used to this state of affairs, but how strange this must seem from Rome or Accra or Manila.

Even for a local, there are things about the current domination of the technology industry that are startling. Take the San Francisco skyline. In 2007, the visual core of the city was north of Market Street, in the chunky buildings of the downtown financial district. The TransAmerica Pyramid was a regional icon and had been the tallest building in the city since construction was completed in 1972. Finance companies were housed there. Traditional industries and power still reigned. Until quite recently, San Francisco had primarily been a cultural reservoir for the technology industries in Silicon Valley to the south.

But then came the growth of Twitter and Uber and Salesforce. To compete for talent with the big guys in Silicon Valley, the upstarts could offer a job in the city in which you wanted to live. Maybe Salesforce wasn’t as sexy as Google, but could Google offer a bike commute from the Mission?

Fast-forward 10 years and the skyline has been transformed. From Market Street to the landing of the Bay Bridge, the swath known as South of Market or, after the fashion of the day, SOMA, has been reshaped completely by steel and glass towers. At times over the last decade, a dozen cranes perched over the city, nearly all of them in SOMA. Farther south, in Mission Bay, San Francisco’s mini-Rust Belt of former industrial facilities and cargo piers became just one big gleam of glass and steel on landfill. The Warriors will break ground on a new, tech-industry-accessible basketball manse nearby. All in an area once called Butchertown, where Mission Creek ran red to the Bay with the blood of animals.

So, that’s what I’ll be covering back here at The Atlantic: technology and the ideas that animate its creation, starting with broad-spectrum reporting on the most powerful companies the world has ever known, but encompassing the fringes where the unexpected and novel lurk. These are globe-spanning companies whose impact can be felt at the macroeconomic scale, but they exist within this one tiny slice of the world. The place seeps into the products. The particulars and peccadilloes from a coast become embedded in the tools that half of humanity now finds indispensable.

Protecting the Public Commons
May 16th, 2017, 12:35 PM

The debate about the role technology plays in society is as old as humankind’s ability to use tools and techniques to change our world. The technologies we have in our hands today would be magic to our forefathers, from gene editing to spacecraft to the smartphone you’re likely using to read this article.

The impact of information technology on democracies is a comparatively younger concern, driven by the quicksilver pace of innovation and invention in the minds, labs, and garages of people around the globe, as well as the disruption of the institutions that held monopolies on the production and distribution of information.

For the billions of humans who are now connected to the broader world by mobile devices, our experience is increasingly mediated by screens animated by endless rivers of news, livestreams, and entertainment. Our feeds are personalized not only by our individual choices about media outlets but also by the algorithmic determinations of technology companies that may place commercial interests before public goals. Our public squares are hosted on private platforms that weren’t designed with civic good in mind.

Disintermediation, dystopia, and dismay are the words of the day, eroding the last of the romantic dreams of a better world, built anew in virtual spaces and places. “Digital dualism” is finally on life support, replaced by a dawning recognition that the distinction between offline and online has collapsed. Instead, we face pervasive surveillance enabled by the growth of cameras, sensors, connected devices, and data collection in our communities.

Despite systemic issues in education, criminal justice, and elections, the United States remains an inherently open society, where freedom of speech, information, association, and assembly are baked into Americans’ understanding of their culture and enshrined in statute.

As we move into the next century, however, we are faced with renewed challenges around our politics, as partisan polarization expresses itself not only in party affiliation but through media consumption and the accountability journalism that we rely upon to make markets and governments transparent.

One of the biggest risks to our democracy today is the crisis in local news, as newspaper publishers have lost half of their employment since 2001. Digital outlets have hired some of those journalists, but the platform press of Silicon Valley now controls the means of production, distribution, and the vast majority of the advertising revenue associated with publishing online.

Open questions about what will replace local outlets are coupled with historic lows in American trust in government and media institutions. There will be no singular solution to the complex phenomena posed by these changes. What the public-education system, technology companies, and publishers can do is invest in civics education and digital infrastructure, from broadband Internet to public data feeds. Facebook’s addition of civic features that prompt people to register to vote and participate in state and local government is an important step forward. Every member of civil society and every institution has a role in informing communities about how government works. A core component of a high-school education should be teaching people how to judge risk, how to interpret statistics, and how to exercise their rights to access public information.

These aren’t new ideas, but making progress is more urgent in an era when our ability to make public policy based upon evidence is in question. At a time when the biggest risks to open government data are political, we must protect not only the integrity of the vast public commons of knowledge built up over the years but also the continued relevance of shared facts based upon high-quality statistical data, from our Census to our scientific agencies.

Cities, states, and the federal government must continue to invest not only in opening public information to the public online but also in partnering with communities to apply it in the public interest, using 21st-century tools to reform 20th-century institutions by honoring the 18th-century philosophies that inspired our nation’s founding. By doing so, we will embrace what has always made America great: our ability not only to adapt in the face of technological change but to adopt the tools of the present to rebuild our communities in preparation for an uncertain future.


This article is part of a collaboration with the Markkula Center for Applied Ethics at Santa Clara University.

Skydiving From the Edge of Space
May 16th, 2017, 12:35 PM

On May 8, 2013, Alan Eustace, then the 56-year-old senior vice president of knowledge at Google, jumped from an airplane 18,000 feet above the desert in Coolidge, Arizona. Anyone watching would have witnessed an odd sight: Eustace was wearing a bulky white space suit—the kind NASA astronauts wear. He looked like a free-falling Michelin Man.

Through his giant space helmet and oxygen mask, Eustace could see the ground stretched out for miles. But the view wasn’t his main concern. He hadn’t quite worked out how to control the space suit, which, unlike a typical skydiving suit, weighed about 265 pounds and was pumped full of pressurized air. Eustace, an experienced skydiver, knew how to shift his body to change direction or to stop himself from spinning—a problem that, if uncorrected, can lead to blackout, then death. But when he started to rotate—slowly at first, then faster and faster—his attempts to steady himself just made things worse. He felt like he was bouncing around inside a concrete box.

At 10,000 feet, Eustace pulled a cord to open his parachute. Nothing happened. Then he tried a backup cord. That one didn’t work either. Eustace knew better than to panic: Three safety divers had jumped with him to monitor his fall. Within seconds, one of the divers reached across Eustace and yanked open the main chute.

All Eustace had to do now was depressurize his suit, which would deflate it and allow him to steer himself toward the landing area. He reached for a dial on the side of the suit and turned it. Nothing happened. With the suit still pressurized, Eustace couldn’t extend his arms overhead to grab the handles that controlled the chute. He began slowly drifting off course. Soon he lost sight of the safety divers. He tried to radio for help, but got no response. He now had a more pressing problem: As he approached the ground, he saw that he was headed straight for a giant saguaro cactus. Unable to maneuver his chute, he leaned as far to the right as he could and just managed to avoid the cactus, instead landing headfirst in the sand.

Eustace’s jump from 18,000 feet above Coolidge, Arizona, in May 2013—his first test of the space suit in action (Daniel Blignaut)

He craned his neck to look around. The suit was still pressurized, which meant that he didn’t have enough flexibility to take his helmet off to breathe. He tried his radio again. Still dead. He knew the safety divers would have alerted rescuers that he’d gone off course. He just didn’t know how far off course he’d gone. He calculated that he had two hours of oxygen left in his tank. If he sat still and didn’t panic, he should have enough to survive until the rescue team found him. His other option was to try depressurizing the suit again. But if that didn’t work, he’d have wasted a significant amount of oxygen in the effort. He decided to wait until he had just 15 minutes of oxygen left. By that point, he would be desperate enough to try anything.

The sun beat down as Eustace lay by the cactus, watching the meter on his oxygen tank.

Twelve minutes and what felt like an eternity later, he heard the sound of an approaching helicopter. Oh good, he thought, relaxing. I’m nowhere near dead.

Which was fortunate, because this was only a practice round. What Eustace was gearing up for was something much more dangerous: a jump from seven and a half times the altitude, the highest ever attempted. A skydive from the edge of space.

Alan Eustace at home in Mountain View, April 2017 (Ian Allen)

The whole thing began innocently enough. Eustace was sitting in his office at Google’s headquarters in Mountain View, California, one day in late 2008 when his boss Sergey Brin dropped by. Brin knew Eustace had skydived recreationally in the past, and wanted to know whether he thought it would be possible for someone to jump out of a Gulfstream, a large, expensive private jet that Brin sometimes used.

Brin had already asked around, but almost everyone he’d consulted—Gulfstream pilots, military skydivers, even the company that makes the jet—had advised against it. Gulfstreams fly at much higher speeds than typical jump planes, so fast that experts worried anyone exiting midair would risk getting sucked into the engine, or hitting the tail of the plane, or getting burned to death by the exhaust.

Eustace wasn’t a jet pilot, or a professional daredevil. He was an engineer from Florida who had designed computer-processing units for 15 years in Palo Alto before Larry Page persuaded him to join his growing company over breakfast one morning in 2002. Eustace hadn’t been skydiving in 26 years, but the idea intrigued him: He wasn’t convinced that the skeptics were right. As an engineer, he preferred to approach a problem from first principles. If it was impossible, why? What was the trajectory of the exhaust? Would the FAA grant approval to open the door mid-flight, which would require circumventing the user manual?

Eustace spent the next few months trying to answer these questions, in between projects that demanded his more immediate attention. He eventually lined up a skydiver to try a jump out of a Cessna Caravan, another high-speed aircraft. Luckily, the skydiver landed without incident. What’s more, he filmed himself. When Eustace brought Brin the footage, Brin seemed surprised that he had followed up. But by this point, Eustace was hooked—and he was starting to consider trying the jump himself. All he’d have to do was get reacquainted with the equipment and do a couple of test jumps.

In August 2010, Eustace took a few days off and went down to the suburbs of Los Angeles, where he did six practice jumps with an instructor, a professional stunt skydiver named Luigi Cani. The two hit it off—Cani was warm and friendly, and seemed up for anything. He loved the Gulfstream idea.

A few months later, Eustace was back home in Mountain View when his phone rang. It was Cani. He wanted to know whether Eustace had heard about a guy named Felix Baumgartner, who was after an even bigger challenge: He was trying to beat the high-altitude-skydiving record with a jump from the upper reaches of the stratosphere, more than 100,000 feet in the air. Cani had found a sponsor to launch a competing effort, and wondered whether Eustace could advise him on the type of equipment he’d need.

Eustace was delighted. He was sure Baumgartner was way ahead—he had backing from the energy-drink company Red Bull, which had hired more than three dozen team members with backgrounds in NASA, the Air Force, and the aerospace industry—but he liked Cani, and wanted to see him create some healthy competition. He agreed to help in any way he could. But before Cani’s effort could kick off, his funding fell through.

Eustace considered this news. He led a quiet, comfortable life. He wasn’t after publicity or adrenaline. But this was the engineering challenge of a lifetime. Forget the Gulfstream. He could attempt the stratosphere jump himself, and fund it with his own savings. He thought for a few months and called Cani to ask for his blessing. Cani laughed, amused. Go for it, he said.

The atmosphere is divided into five layers. The higher you go, the thinner the air, until eventually you hit outer space. The layer closest to Earth, the troposphere, is where weather occurs. The next layer, between 33,000 and 160,000 feet above sea level, is the stratosphere. It marks the beginning of what’s known as “near space”—the threshold between the planet we experience on the ground and the mysteries of the universe beyond.

Prior to the onset of the space race in the late 1950s, much of the scientific study into high altitudes was focused on the stratosphere. Starting in the 1930s, scientists used high-altitude balloons to gather meteorological data and document various changes in the upper atmosphere. Then, in 1960, a United States Air Force captain named Joseph Kittinger rose 102,800 feet in a gondola suspended from a helium balloon—and jumped. Kittinger was part of Project Excelsior, a pre-space-age military operation designed to study the effects of high-altitude bailouts. An earlier attempt, from 76,400 feet, had almost killed him: His equipment had malfunctioned and he’d lost consciousness; he was saved only by his automatic emergency parachute. His next jump, from 74,700 feet, had gone better. This one—his third—set a high-altitude-skydiving record that would remain in place for more than 50 years.

NASA would soon send a man into orbit, and ambitions would turn to the moon. The expansion of the space program coincided with a series of catastrophic balloon accidents, and exploration into the stratosphere was largely abandoned.

That is, until 2010, when Baumgartner announced that he was going after Kittinger’s record, with the backing of none other than Kittinger himself—plus a hefty sponsorship from Red Bull. Plenty of people had contacted Kittinger over the years, wanting him to help them break the record, but Baumgartner was the first to come with a sound scientific support system, courtesy of Red Bull’s team of professionals. The effort, amplified by Baumgartner’s high-octane personal life, attracted a lot of press.

Eustace was an unlikely competitor. The son of an aerospace engineer for Martin Marietta (a forerunner of Lockheed Martin), Eustace had grown up loving planes, but his first time jumping out of one—18 years old, dragged along by his best friend—he felt less exhilaration than ambivalence. The equipment was primitive—coveralls, thick boots, military-grade parachutes—and Eustace landed hard. The experience was a blur. He didn’t know whether he’d done it right, and he certainly didn’t plan to do it again.

Then the instructor handed him his evaluation. His friend’s jump was terrible, but the instructor had deemed Eustace’s “perfect.” So when his friend wanted to go back a week later, Eustace went along. He enjoyed it much more the second time: He was less nervous, and could actually remember what he had done. He went again, and again, and after his 10th jump, he invested in a higher-performance parachute. Then he mastered a stand-up landing, instead of a drop-and-roll. He learned to dive, swoop, somersault, slow down, and speed up, until skydiving became less like falling than like flying.

Eustace (center) skydiving with friends in 1981, while getting his doctorate in computer science at the University of Central Florida. (Tom Plonka)

Eustace began skydiving as often as he could manage between classes at the University of Central Florida, where he majored in computer science and went on to get his doctorate. But as his career took off, Eustace invested less and less time in the sport. Eventually, he sold his equipment.

Skydiving from the stratosphere seemed like a drastic way to get back into practice. But the more he thought about it, the harder it was for him to imagine someone else doing it. His day job—overseeing Google’s engineers—was all about building technology to solve problems and move people forward. Breaking the record would be a personal challenge, but more important, it would be a chance to push the boundaries of human experience. First, he’d need a suit.

The list of things that can go wrong when parachuting from extreme heights is nearly endless. The stratosphere is cold, for one—the temperature can reach more than 100 degrees below zero. The air is also about 1,000 times thinner than at sea level, which means that without a pressurized suit, bodily fluids start to boil, creating gas bubbles that lead to mass swelling.

The environment is so hostile that high-altitude jumpers have to bring their own. For his record-breaking jump, Kittinger wore a partial-pressure suit—a close-fitting garment with a network of thin inflatable tubes that squeeze the body to make up for the decrease in atmospheric pressure—on top of four layers of clothing for warmth. On the way up, which took about an hour and a half, he rode in an open gondola that contained an oxygen supply, a communications system, altimeters, and the power source for his electrically heated gloves—everything he needed to survive prolonged exposure to the altitude.

But gondolas present their own risks. In 1962, a Soviet air-force colonel named Pyotr Dolgov hit his head on the side of his gondola when he jumped from almost 94,000 feet, cracking the visor of his helmet and accidentally depressurizing his suit. He died before he hit the ground. A few years later, an amateur skydiver from New Jersey named Nick Piantanida was unable to switch from the oxygen supply in the gondola to the one attached to his suit when he reached his intended jump height of 123,500 feet, and had to abort the trip. (An unknown equipment malfunction on his next attempt would be fatal.)

Gondolas are also heavy. Baumgartner’s team was using one that weighed almost 3,000 pounds. Ditching the gondola not only would be safer, Eustace figured, but would also allow him to start his jump from a greater height.

But nobody had ever attempted a stratosphere jump without one. If Eustace was going to rise 26 miles into the air attached to nothing but a helium balloon, he’d need a suit that would provide the same environmental protections—oxygen, instruments, climate control—that a gondola would. In short, he would need a space suit. The problem was that no one had designed or flown a new space suit in about 40 years. NASA has been using essentially the same version of the Apollo suit since the 1970s—and Eustace couldn’t just borrow one of those. He needed a suit that could survive a slow ascent into the stratosphere and a fast descent, with swift changes in temperature and velocity, and that could also support the weight of a giant parachute.

1 | Balloon equipment module: Connects the balloon to the jumper. The module fires a small explosive to detach the jumper for descent.

2 | Instrument panel: Displays oxygen-tank levels, suit pressure, and altitude.

3 | Depressurization valve: The jumper pulls the safety loop and turns the valve to depressurize the suit, making it easier to steer in preparation for landing.

4 | Parachute handles: Attached to cords that open the main and reserve parachutes.

5 | Equipment-module chest pack: Contains two oxygen tanks, radios, monitoring devices, and a thermal unit to heat the water that circulates through the suit to keep the jumper warm.

6 | Mountaineering boots: Designed for expeditions on Mount Everest, climbing boots worn under the space suit protect from the extreme cold and can bear a load of more than 400 pounds on landing.

Eustace began to dedicate his nights and weekends to thinking about the design. He was still working 80-hour weeks at Google, but he had a lot of vacation time saved up, and his bosses—Brin and Page—were encouraging. A saying inside the company was that employees should have “a healthy disrespect for the impossible.”

Eustace’s wife, Kathy Kwan, was less enthusiastic. The couple had two daughters, 11 and 16, and she knew the history of the sport. Eustace was so engrossed in the technological challenges that the possibility of death didn’t really enter his mind—any risk, he thought, could be mitigated by enough advance preparation. The couple made an uneasy truce: Kwan would support Eustace’s project, and he would avoid bringing it up—no stratosphere talk at the dinner table. (Kwan politely declined to speak with me, saying she preferred not to dredge up those particular memories.)

In October 2011, a contact in the aviation industry connected Eustace with a married couple named Taber MacCallum and Jane Poynter, co-founders of Paragon Space Development. MacCallum and Poynter had been two of the eight crew members on the famous Biosphere 2 project of the early ’90s, living in a sealed artificial world for two years to determine whether humans could survive in closed ecosystems beyond Earth. They had started Paragon to create biological and chemical life-support systems for hazardous environments, like the deep sea and outer space.

The couple was used to getting calls from people asking all kinds of crazy things: Can you fly me into space? Would it be possible to strap me to a rocket? But this was the first time they’d heard anyone propose a stratosphere jump without a capsule. MacCallum was intrigued enough to set up a call with Eustace, and the two spoke for more than an hour. A week later, Eustace flew down to Paragon’s headquarters, in Tucson, Arizona, and spent a day presenting his idea.

MacCallum and Poynter soon agreed to lead Eustace’s engineering team. They gathered the company’s leading engineers, mechanics, and flight operators to work on the design, and commissioned ILC Dover—the same manufacturing company that makes NASA’s suits—to build a prototype.

Eustace soon began making regular trips to Tucson for testing. The team put the suit in a wind tunnel and a vacuum chamber to determine how it would hold up in free fall. They hung Eustace from a nylon strap and spun him around so he could practice operating his equipment in midair. Next came a series of thermal tests, to ensure the suit could handle subzero temperatures. Eustace was suspended inside a sealed, liquid-nitrogen-cooled chamber for five hours at a time. Small tubes in the suit were supposed to circulate hot water around his limbs and chest to keep him warm. But the tubes ended at the wrists, meaning that, even with a pair of electrically heated mountain-climbing gloves, Eustace’s hands eventually began to freeze. The team gave him a pair of oven mitts to wear on top of the gloves.

A member of Paragon’s engineering team testing how the suit would respond to changes in air pressure (Volker Kern)

In October 2012, a year into Eustace’s work with Paragon, Felix Baumgartner succeeded in breaking Kittinger’s 1960 record, free-falling to Earth from a height of 127,852 feet. Reporters from all over the world came to witness the event, and a live webcast of the jump racked up more than 8 million views. Rather than deter Eustace, Baumgartner’s jump gave him a test case. Shortly after exiting the capsule, Baumgartner entered a dangerous spin. He was able to right himself in time, but Eustace would be less agile in his suit and knew that he would need to figure out how to avoid the same problem.

Eustace and his team began doing dummy drops from airplanes in the Arizona desert. The test dummy, known as IDA (for “Iron Dummy Assembly”), was made from welded high-pressure pipes, the kind used in industrial plumbing. She was dropped from various heights, equipped with a parachute that opened at a preset altitude. She spun wildly on her way down. One time, her arms and legs flew off.

The team tried to fix the problem by introducing a drogue—a round parachute about six feet across that is supposed to add stability. The Coolidge jump, in May 2013, was Eustace’s first chance to test the equipment himself. While nearly everything went wrong, the biggest problem remained spin. Eustace began spinning almost immediately after he left the plane, even with the drogue, and the suit was too rigid to allow him to correct himself midair the way he would during a skydive from a lower altitude.

After the Coolidge jump, the team decided to raise the attachment point of the drogue, moving it from the seat of the suit to the back of the neck. That would make Eustace fall at a slight angle, and therefore not spin. To keep his arms from getting tangled up in the strings when the chute deployed, the engineers added a boom that would extend when the drogue opened and keep it at a safe distance from the suit. They called the system SAEBER.

When the team tested the system on IDA from 120,000 feet, her spinning slowed from 400 rpm to 22 rpm, a gentle pirouette. Eustace did more practice jumps, learning to stick out his elbows to correct himself in midair. They were finally ready.

Eustace woke up well before dawn on Friday, October 24, 2014, in a tin shed on an unused strip of land next to the airport in Roswell, New Mexico—a site that had been chosen for its open space and relatively few cacti. The weather was perfect.

He spent two hours sitting in a vinyl recliner behind the shed breathing pure oxygen, to prevent decompression sickness. He drank water and Gatorade. Occasionally he stood and did some stretches to get nitrogen out of his tissues. Then he pulled on a diaper—it would be a long ride up—and was helped into his suit by four team members. They attached two GoPros to his chest and wheeled him out to the launchpad on a dolly.

Kwan had chosen to stay home. The girls had school that day—Eustace and Kwan had decided to keep them on their normal schedule—but had been granted permission to bring their phones to class so they could get updates from the launch site. The Paragon team and a single reporter from The New York Times would be the only onlookers.

The team strapped Eustace to a massive helium balloon—525 feet in diameter when fully inflated, roughly the size of a football stadium—and untethered it from the launchpad. Just like that, Eustace was on his way. He felt relaxed, almost drowsy, as the balloon rose above the airport. He worried for a moment that he might fall asleep and miss the jump.

Left: Inflating the helium balloon that would carry Eustace to the stratosphere. Right: Eustace starting his ascent. (J. Martin Harris Photography)

As Eustace drifted higher, he began to make out landmarks: New Mexico’s White Sands, the Rocky Mountains. Crop circles became tiny specks. Whole states appeared and receded. At 70,000 feet, the sky darkened. Delicate cloud formations appeared below him. Eustace felt like he was floating above a lace doily. At 80,000 feet, the curvature of Earth became visible. He turned his head to look for the moon.

Of course, he was also comparing his flight path to the projections, keeping an eye on the time and the stratospheric winds that were expected to kick in and push him east, and doing a mental rehearsal of the emergency procedures. At one point, Eustace stopped climbing fast enough, so ground control radioed him to let him know that it was releasing two 30-pound ballast weights. Each ballast had its own parachute, and he watched with interest as they fell back to Earth.

After two hours and seven minutes, Eustace reached 135,890 feet. This was float altitude: The balloon had expanded as far as it could, so he would not rise farther. Ground control would now detach him by remote control. The countdown began. On “zero,” Eustace felt the balloon snap and drift off. For a single moment, he felt like he was hovering in midair. He did a backflip. Then he did another.

Then SAEBER kicked in, launching the drogue and pushing Eustace into a downward position, facing Earth. The stratosphere was quiet as Eustace began free-falling, but soon he could hear the rush of air inside his helmet. He passed 822 miles an hour, breaking the speed of sound. At about 8,300 feet above the ground—after four minutes and 27 seconds of free fall—Eustace deployed his main parachute. Nine and a half minutes later, he landed with a smile on his face. His team rushed over, barely able to contain the whoops and yeahs. The record was his.

The Times reporter’s story would not run until later that day, and Eustace’s reception was decidedly more muted than Baumgartner’s. After he was freed from the suit, he helped clean up the landing site, check the GoPro footage, and wrap up the parachute. That night, the whole team went to a Mexican restaurant in Roswell. Eustace was on his third margarita when he got a text from his sister, who was at a bar in Florida and, by some cosmic coincidence, had bumped into none other than Joseph Kittinger. Recognizing him, she went up to him and said, “Hey, did you know that my brother just broke your record?” Kittinger congratulated Eustace by phone the next day and invited him to have a beer sometime. Baumgartner, too, released a statement congratulating him.

The next Monday, Eustace was back behind his desk at Google.

Last December, Eustace’s suit was put on display at the Smithsonian’s National Air and Space Museum in Chantilly, Virginia. In the two and a half years since the jump, Eustace has given countless talks about the suit—at NASA, the Jet Propulsion Laboratory, SpaceX. But most people still don’t know that Eustace broke Baumgartner’s record. “If someone says, ‘Hey, this is the guy who holds the record for the highest-altitude jump,’ ” he told me, “people will usually just turn to me and ask, ‘Oh, are you Felix?’ ”

He retired from Google a few months after the jump to focus on his own projects—including consulting for a space-tourism company called World View, which MacCallum and Poynter helped form while Eustace was working on his jump. Ventures including SpaceX and Virgin Galactic have been working on ways to send civilians into space on rockets. World View is building an eight-person spacecraft that will float up into the stratosphere using a helium balloon, then detach and float back down with the help of a steerable parachute, like the one Eustace used. The trip will be significantly cheaper than going into space—$75,000 a ticket compared with about $250,000 for a ride with Virgin Galactic—which, if not quite democratizing the experience, will at least give more people an opportunity for perspective-altering views.

Inside World View’s facility in Tucson sits a full-size replica of the Voyager capsule. It has four big windows and a bubble roof, so everyone on board can have a 360-degree view of space. The capsule has a small bathroom, Wi-Fi, and a bar. It will be a five-hour flight in total: one and a half hours up, then a couple of hours floating at about 100,000 feet before the descent. Eventually, World View hopes to hold wine tastings and photography classes in the stratosphere. The company is targeting late 2018 for its first flight.

Eustace isn’t planning to go—he feels it would be anticlimactic. He had hoped to venture out in his space suit again, but ultimately decided that another jump would put too much strain on his family. So he takes every other chance he gets to launch himself skyward.

A few years after he started working as an engineer, Eustace bought a bright-yellow Lockwood AirCam, a small two-seater with an open cockpit. He took me to see it one blustery afternoon in December, in a private hangar at the San Carlos Airport. We drove there from Eustace’s house in his Tesla, to which he had recently upgraded, at Kwan’s urging, from a 2002 Honda Accord.

Eustace in his AirCam (Ian Allen)

I had confessed earlier that I was terrified of heights. “Just don’t scream too loudly in my ear when we’re up there,” he joked as we pulled up to the hangar. “That could really make us crash.”

We geared up: puffy pants and jackets and heavy helmets. Eustace helped strap me into the back seat, then jumped in the front. After a few radio calls to flight control, we pointed down the runway and took off. The plane lived up to its tagline—slow and low—and at first, it was almost like we were floating in a balloon. But as we got higher, flying over the tops of office buildings, the wind picked up. Although I was wearing gloves, my hands started getting numb. I thought about putting them in my pockets, but didn’t want to let go of the sides of the plane, which I was gripping with all my strength. We rose higher and higher and banked right over the San Francisco Bay. The water glittered below us, the bridge stretching across the horizon.

After about 20 minutes, I heard Eustace’s voice in my ear: “Do you want to take control?” There was a small control stick in front of me, which Eustace had shown me how to use before we took off—a slight pull to go higher, a push sideways to turn. Still holding on to the side of the plane with one hand, I used my other to tilt the stick slightly to the right. The plane tilted to the right. “Oh!,” I said, in genuine surprise, forgetting my fear for a moment. “I’m flying!”

Eustace just laughed. “Go higher!” he said.