Technology | The Atlantic
If Buddhist Monks Trained AI
June 29th, 2017, 02:45 AM

The Harvard psychologist Joshua Greene is an expert in “trolleyology,” the self-effacing way he describes his research into the manifold variations on the trolley problem. The basic form of this problem is simple: There’s a trolley barreling towards five people, who will die if they’re hit. But you could switch the trolley onto another track on which only a single person stands. Should you do it?

From this simple test of moral intuition, researchers like Greene have created an endless set of variations. By varying the conditions ever so slightly, the trolley problem can serve as an empirical probe of human minds and communities (though not everyone agrees).

For example, consider the footbridge variation: You’re standing on a footbridge above the trolley tracks with a very large person who, if pushed, would stop the trolley from killing the five people. Though the number of lives saved is the same, it turns out that far more people would throw the switch than push the person.

But this is not quite a universal result. During a session Wednesday at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic, Greene joked that only two populations were likely to say that it was okay to push the person on the tracks: psychopaths and economists.

Later in his talk, however, he returned to this through the work of Xin Xiang, an undergraduate researcher who wrote a prize-winning thesis in his lab titled “Would the Buddha Push the Man off the Footbridge? Systematic Variations in the Moral Judgment and Punishment Tendencies of Han Chinese, Tibetans, and Americans.”

Xiang administered the footbridge variation to practicing Buddhist monks near the city of Lhasa and compared their answers to those of Han Chinese and American populations. “The [monks] were overwhelmingly more likely to say it was okay to push the guy off the footbridge,” Greene said.

He noted that their results were similar to those of psychopaths—clinically defined—and of people with damage to a specific part of the brain called the ventromedial prefrontal cortex.

“But I think the Buddhist monks were doing something very different,” Greene said. “When they gave that response, they said, ‘Of course, killing somebody is a terrible thing to do, but if your intention is pure and you are really doing it for the greater good, and you’re not doing it for yourself or your family, then that could be justified.’”

For Greene, the common intuition that it’s okay to use the switch but not to push the person is a kind of “bug” in our biologically evolved moral systems.

“So you might look at the footbridge trolley case and say, okay, pushing the guy off the bridge, that’s clearly wrong. That violates someone’s rights. You’re using them as a trolley stopper, et cetera. But the switch case, that’s fine,” he said. “And then I come along and tell you, look, a large part of what you’re responding to is pushing with your hands versus hitting a switch. Do you think that’s morally important?”

He waited a beat, then continued.

“If a friend was on a footbridge and called you and said, ‘Hey, there’s a trolley coming. I might be able to save five lives but I’m going to end up killing somebody! What should I do?’ Would you say, ‘Well, that depends. Will you be pushing with your hands or using a switch?’”

What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.

Greene tied his work on moral intuitions to the current crop of artificial-intelligence software. Even if AI systems never encounter problems as stylized as the trolley and footbridge examples, they must embed some kind of ethical framework: even without explicit rules for when to take certain actions, they must be trained with some kind of ethical sense.

And, in fact, Greene said that he’s witnessed a surge in people talking about trolleyology because of the imminent appearance of self-driving cars on human-made roads. Autonomous vehicles do seem like they will be faced with some variations on the trolley problem, though Greene said the most likely would be whether the cars should ever sacrifice their occupants to save more lives on the road.

Again, in that instance, people don’t hold consistent views. They say, in general, that cars should be utilitarian and save the most lives. But when it comes to their specific car, their feelings flip.

All these toy problems add up to a (still incomplete) portrait of human moral intuitions, which are being forced into explicit shapes by the necessity of training robots. Which is totally bizarre.

And the big question Greene wants us to ask ourselves before building these systems is: do we know which parts of our moral intuition are features and which are bugs?

The iPhone Was Inevitable
June 29th, 2017, 02:45 AM

A man sits in a chair in front of a small documentary camera crew. He’s trim, dressed in all black. A red notebook sits on his lap. “Here’s what I wrote in 1989,” he says. “This is a very personal object. It must be beautiful. It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it’s not being used. It should offer the comfort of a touchstone, the tactile satisfaction of a seashell, the enchantment of a crystal.”

Then comes the reveal. He picks up the notebook. We see a sketch: a rectangular slab of glass, all display, except for a bezel at the top and bottom. From his pocket, he pulls an iPhone and holds it above the drawing. The similarities are startling.

“We really had it,” he says with a thin laugh. “We definitely had it.”

This is a scene from the forthcoming documentary General Magic, named for the company that attempted to manufacture the device from the notebook. The man is Marc Porat, CEO of the company. He’d recruited two Apple employees, Bill Atkinson and Andy Hertzfeld, who had created the Macintosh. In its earliest iteration, inside Apple, the project had been called Pocket Crystal.

A screenshot from the documentary film General Magic.

After the project was spun out and years of frenzied development, Wired profiled the company in its April 1994 issue. There were 13 million internet users in 1994. There was roughly one cell phone per 100 people on Earth, none of them equipped to do much more than make calls. The first SMS text message had been sent just two years before.

Yet General Magic’s founders were convinced that they were making the most important device ever.

“It’s like a lot of different areas are converging on an electronic box that’s in your pocket, that’s with you all the time, that supports you in some way,” Atkinson told Wired. “That’s going to happen with or without General Magic.”

He was right.

The iPhone launched 10 years ago. The device—and its many, many descendants—is core to how we live. After only a decade, smartphones easily outnumber PCs, despite personal computing’s quarter-century head start. There are 2.5 billion Apple iOS and Android smartphones out there, with that number, as analyst Ben Evans puts it, “heading for 5 billion plus users.” PCs never even cracked 2 billion users and are now drifting downwards.

The iPhone is the single most successful product of all time. One billion iPhones have been sold. They underpin the most valuable company in history, and have catalyzed a whole new technology industry that’s an order of magnitude larger than the one built around PCs. This came with a major assist from Android, the mobile operating system that Google acquired, and then rebuilt after the iPhone came out. But the iPhone pioneered the market, the user interface, the working form factor, and the app store. And iPhone users drove network upgrades and buildouts by the major wireless carriers across the world, because people with the Apple devices consumed so much data relative to other cellphone users.

In short: the iPhone is the Pocket Crystal, and we are all enchanted.

But staring at the 1989 sketch and down at one’s phone, it is hard not to ask: How could the form, appeal, and importance of the device have been apparent 18 years before its appearance?

Was the iPhone, in some way, inevitable?

* * *

If you want to understand the long sweep of tech history that culminated in the iPhone, it’s worth paying a visit to Bill Buxton’s gadget museum. A Microsoft user-interface designer, he’s collected dozens and dozens of interactive devices, and documented them for all to see. Strange keyboards, handheld devices, electronic gloves, touch screens, touch pads, phones, and e-readers.

The General Magic Data Rover 840, a 1998 release, is in the collection. It looks nothing like a Pocket Crystal. Like all the other devices designed to work with General Magic’s software—e.g. the Sony PIC series and Motorola Envoy—the housing is grey and bulky. There’s a stylus, of course, and a grayscale backlit screen. The device is heavily skeuomorphic, drawing on real-world analogs for everything. To add a new contact, one first had to go to the “Office,” one of the software’s “rooms,” before pulling up the address-book functionality. The settings were located in the “Hallway,” like a thermostat.

General Magic Data Rover 840 (Buxton Collection).

This is not to fault General Magic for creating devices with the technology of the era. Buxton’s collection contains other key precursors to the modern smartphone—and all of them have that teenage awkwardness to them.

There’s the Newton, Apple’s own personal digital assistant, which was released in 1993. The so-called MessagePad looks more like Porat’s sketch, but it relied on shaky handwriting recognition and inadequate battery technology. While the Newton improved through the ’90s, it was eventually canceled, and history records it mostly as a flop.

Apple Newton Message Pad 120 (Buxton Collection).

Then there are the various devices that Palm powered. The Palm Pilot, introduced in 1996, became the standard bearer for PDAs, as they were known through the end of the ’90s. They were useful and improved steadily, but never became much more than glorified address books and calendars.

Palm Pilot (Buxton Collection).

“No computer product category has been more ridiculed than the PDA,” wrote Home Office Computing magazine in 1995. “Originally conceived as a tiny digital factotum that would call home, receive faxes, store documents, and send email, the first PDAs from AT&T, Apple, Casio, and Tandy fell far short of expectations.”

That’s how a review of the most intriguing early smartphone, the IBM/BellSouth Simon, begins. It was a straight-up smartphone with a touchscreen—in the mid-’90s. The battery lasted eight hours in standby mode or a single hour in use. It weighed more than a pound. And it cost $899. But it worked better than the rest of the devices out there.

The Simon, a collaboration between IBM and BellSouth Cellular (Buxton Collection).

The Home Office Computing review ends promisingly, or ominously, as it were. “It may be that we're still asking too much of PDAs,” it says. “For example, how can you possibly fit an acceptably large touch screen on an object that’s supposed to fit in your pocket?”

While PDAs floundered through the 1990s, cell phones soared. Nokia became the world’s dominant cell-phone maker with rugged, simple devices. It’s easy to forget that Nokia was the cell phone game for many years. In the year the iPhone came out (2007), Nokia sold 437 million phones and had nearly half of the cell phone market. And yet it never released anything that looked like the Pocket Crystal.

But that’s not to say the company didn’t think about it. In a funereal piece in The Wall Street Journal, former head designer Frank Nuovo rued Nokia’s mistakes.

“More than seven years before Apple Inc. rolled out the iPhone, the Nokia team showed a phone with a color touchscreen set above a single button. The device was shown locating a restaurant, playing a racing game and ordering lipstick,” the Journal narrated. “In the late 1990s, Nokia secretly developed another alluring product: a tablet computer with a wireless connection and touch screen—all features today of the hot-selling Apple iPad.”

Nuovo, clicking through his old slides like General Magic’s Porat paging through his old sketches, echoed the General Magic CEO’s lament. “Oh my God,” he said. “We had it completely nailed.”

So many people had it—and with the backing of the world’s most powerful electronics companies—and yet none of them made it.

When Buxton launched his virtual museum six years ago, he told me that it takes two decades for something genuinely new to become a billion-dollar business.

“If what I said is credible, then it is equally credible that anything that is going to become a billion dollar industry in the next 10 years is already 10 years old,” Buxton said. “That completely changes how we should approach innovation. There is no invention out of the blue, but prospecting, mining, refining and then goldsmithing to create something that's worth more than its weight in gold.”

There is no wizard, no singular genius, who comes up with the Next Big Thing, but something like an evolutionary process. Apple’s iPhone business hit a billion dollars in sales in 2008. By 1998, most of the conceptual work for an iPhone-like device had been done.

* * *

“The iPhone is a deeply, almost incomprehensibly, collective achievement,” Brian Merchant declares in his new biography of the iPhone, The One Device.

“Thomas Edison did not invent the lightbulb, but his research team found the filament that produced the beautiful, long-lasting glow necessary to turn it into a hit product,” Merchant writes. “Likewise, Steve Jobs did not invent the smartphone, though his team did make it universally desirable. Yet the lone-inventor concept lives on.”

In The One Device, Merchant works through the technical achievements, distributing acclaim in and outside Apple. The glass—Gorilla Glass—was a Corning achievement, which had its roots in a half-century-old research project. The multi-touch screen that allowed the entire surface of the glass to become the user interface has its origins in the European physics organization CERN. Merchant quotes Buxton saying his lab at the University of Toronto was working on multi-touch in the early 1980s, and that he’d seen an earlier working system at Bell Labs. The winding multi-touch trail continues through the University of Delaware, where an electrical engineer named Wayne Westerman created a multi-touch system for typing to ease his own repetitive stress injuries. Apple eventually bought Westerman’s company and filed patents on the technology with his name on them.

One last example, the lithium-ion battery. Merchant provides a pithy genealogy: “The lithium-ion battery—conceived in an Exxon lab, built into a game-changer by an industry lifer, turned into a mainstream commercial product by a Japanese camera maker, and manufactured with ingredients dredged up in the driest, hottest place on Earth—is the unheralded engine driving our future machines.”

Apple’s supply chain for tin, tantalum, tungsten, gold, and cobalt includes no fewer than 256 refiners and smelters. Look just at the cobalt in lithium-ion batteries and you find a winding trail that leads back primarily to the copper mines of the Congo, and on to smelters in China. China is the biggest consumer of cobalt, and 80 percent of what it consumes goes to battery production.

And that’s just the stuff inside the phone. There is also the nearly unbelievable story of how much data capacity the various cell phone providers have added, which requires tower after tower of equipment. From 2007 to 2010, when iPhones were only available with an AT&T wireless connection, data traffic on AT&T’s network went up 8,000 percent. And the growth kept going. A Cisco research study found that from 2011 to 2016, as smartphones became far more prevalent, mobile data traffic grew 18-fold. Now the country hosts an enormous electronic forest: more than 118,000 towers are in operation, according to an industry publication.

Underpinning all of these systems are the incredible leaps in computing power (Moore’s law) and energy efficiency (Koomey’s law) that have been hallmarks of the computing revolution. The chip work, alone, represents hundreds of billions of dollars of R&D, not to mention the work on modems and wireless technology by places like Qualcomm.

Merchant follows computer historian Chris Garcia in calling the iPhone a confluence technology: “There are so many highly evolved and mature technologies packed into our slim rectangles, blending apparently seamlessly together, that they have converged into a product that may resemble magic.”

A general-purpose kind of magic, you might say.

* * *

General Magic will probably be written out of the history books as time goes on. The company itself never amounted to much. But look at a list of the people who worked there, and two names jump off the (very distinguished) list: Tony Fadell and Andy Rubin. Fadell led the hardware team that created the iPhone. Rubin led the team that created Android. While Android dominates by market share (87 percent worldwide), the iPhone dominates the profits made from smartphone sales. In any case, together, the two operating systems we can trace back to General Magic have 99 percent share of the smartphone market.

It’s a perfect narrative. A few people in the Apple orbit have the perfect idea. That seed incubates for 15 years until the technology stack catches up, and then two alums of General Magic finally create the object from that original inspired vision.

The only problem is that Fadell has said the iPhone team tried out all kinds of things. They put a scrollwheel on one proto-iPhone. The team had a months-long battle over whether to include a hardware keyboard before Steve Jobs made the decision to go keyboard-less. And Rubin’s team only ditched its hardware keyboard plans after the iPhone came out. If General Magic did have a map of the future, the legend must have been lost somewhere along the way.

The iPhone happened, and we can mark the world as before-and-after. It unlocked a new era of human-computer interaction and human-human interaction. The iPhone is the ur object of our time. A version of it is attached to the vast majority of adults. We sleep with them. We spend more time with them than our children. The success of other technology companies, media empires, romantic relationships, and political campaigns depends on reaching people through them.

Happy 10th Birthday iPhone. Happy 10th Birthday World That the iPhone Made.

Where Not to Use Your Phone
June 28th, 2017, 02:45 AM

As MIT professor and psychologist Sherry Turkle sees it, students are obsessed with perfection and invulnerability. That’s why they will email her their questions instead of coming to office hours.

“As I get famouser and famouser, I post more office hours, and the numbers [of students attending them] come down,” said Turkle, who researches and writes on people’s relationship to technology, during a panel at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic. “What they say is basically, ‘I’ll tell you what’s wrong with conversation, it takes place in real time, and you can’t control what you’re going to say.’” These students are trying to hide their vulnerabilities and imperfections behind screens, she said, and they have a “fantasy that at 2 in the morning I’m going to write them the perfect answer to the perfect question.”

It’s all a sign that we’ve become too dependent on our devices to get us through life, as Turkle sees it. She inveighed not only against email-loving Millennials but also against new moms who would rather sit at home on their phones than go meet each other at the playground.

She points to studies that show that having your phone on the table during mealtimes, even if it’s off, leads to reduced feelings of empathy. To truly turn the tide, Turkle said, there ought to be some no-phone times and places.

They are, uh, most of the times of the day and most places, including when you’re in:

  • The kitchen
  • The dining room
  • When you’re shopping for food, or “anything to do with eating or [where] the sensuous food preparation is happening”
  • Class: According to Turkle, most elite universities are banning phones and laptops in class because “you take better notes by hand.”
  • Your bedroom—“it’s bad for intimacy.”
  • Your children’s bedroom—“it’s bad for sleeping.”
  • The car, as driver—“you’ll kill yourself.”
  • The car, as passenger, unless you’re on a “50-hour road trip,” in which case maybe some solo movie time can be negotiated. But keep in mind, in-car chatter is “the conversations children remember for the rest of their lives.”
  • The playground
  • Your children’s swim meets and ball games—“you’ve already wasted your Saturday, put the phone down.”
  • When you’re picking your child up for school—it’s “heartbreaking” if you’re looking at your phone when your child wants your attention.

Strict? Yes. This is the price you pay for empathy. This “is not an anti-tech position,” she said. “It’s a pro-conversation position.”

With that, she was interrupted by someone’s iPhone alarm going off.

Advertising That Exploits Our Deepest Insecurities
June 28th, 2017, 02:45 AM

The function of advertising, wrote Robert E. Lane in The Loss of Happiness in Market Democracies, “is to increase people’s dissatisfaction with any current state of affairs, to create wants, and to exploit the dissatisfactions of the present. Advertising must use dissatisfaction to achieve its purpose.”

The web browser is a dissatisfaction-seeking machine. Every search query we input reflects a desire—to have, to know, to find. Ordinarily, that fact may escape notice. But there are moments when the machine reveals its inhumanity.

Speaking on a panel at the Aspen Ideas Festival, which is cohosted by the Aspen Institute and The Atlantic, Manoush Zomorodi, host of WNYC’s Note to Self, shared a story of a message she received from a listener who’d been following her series on digital privacy. “She was concerned that she might have a drinking problem, and so she went on Google and asked one of those questions, ‘How do you know if you have a drinking problem?’ Two hours later, she goes on Facebook, and she gets an ad for her local liquor store.

“And she left me a voicemail crying, ’cause she was like, ‘You know, it would be one thing if it were even sending me, like, clinics maybe where I could get help. But the fact that that’s how it was targeting me ...’ She felt so betrayed by Facebook, this company with whom she had a very intimate relationship.”

Only 9 percent of adults in the United States say “they feel they have ‘a lot’ of control over how much information is collected about them and how it is used,” according to the Pew Research Center. For most of us, unless we’ve expended the effort to limit the information we share, a vast network of automated snoops constantly monitors our behavior online, and tries to match ads to the fears and desires implicit in our searches and messages.

“You hear these little betrayals of privacy that actually are extremely powerful on a daily basis,” Zomorodi said.

Zomorodi’s co-panelist, the investigative journalist Julia Angwin, spoke about seeing middle-school students plagued by body-image insecurities. “Online, all they get is ads on how to lose weight,” Angwin said. “It preys on their fears. It’s just awful, right? And that is—I don’t know that it’s necessarily targeted advertising, because actually the entire internet is weight-loss ads, as far as I can tell.”

While Google effectively publicizes its aggregate search data—the annual compendium of the year’s queries is always a draw—the service’s value to advertisers comes from precisely the opposite type of data: the personal, strange, incredibly revealing things that each of us is looking for. In a recent episode of the Freakonomics podcast, Seth Stephens-Davidowitz, who wrote his dissertation on what people reveal in Google searches, spoke about how people expose a version of themselves to the search engine that they rarely present in surveys, or even in conversations with friends. “There are lots of bizarre questions—not just questions but statements—that people make on Google,” said Stephens-Davidowitz. “‘I’m sad’ or ‘I’m drunk’ or ‘I love my girlfriend’s boobs.’ Why are you telling Google that? It feels like a confessional window where people just type statements with no reason[able impression] that Google would be able to help.”

If our ad networks have become our confessors, what sort of penance will they extract? What latent or secret desires will they exploit? What could they prod us to do?

What Do You Tell Your Kids About Online Privacy?
June 27th, 2017, 02:45 AM

The future of privacy in the United States will be shaped by the next generation of citizens and consumers, a rising generation that has never known a pre-Internet world.  

The broadcast journalist Manoush Zomorodi created a segment called The Privacy Paradox on the WNYC show “Note to Self.” Its premise: “You love the convenience of living online. But you want more control over where your personal information goes.” (The shows dubbed “The 5 Day Plan” are informative. I learned about an additional way that my iPhone was tracking me. And I pay attention to this stuff.)

Zomorodi’s interactions with listeners caused her to think more deeply about the attitudes toward privacy and digital best practices that she ought to pass along as a parent. At a panel Tuesday at the Aspen Ideas Festival, co-hosted by The Aspen Institute and The Atlantic, she expressed chagrin at having chosen Yahoo when creating her child’s first email account––and pride at the child’s subsequent decision to sign up for an account with an overseas email provider that offers strong encryption.

Many parents don’t offer any guidance to their children on digital privacy, if only because their children seem so much more tech savvy than they are. But Zomorodi’s reflections got me wondering what parents who do think about these matters tell their kids as they begin to use the Internet, or smartphones, or get their first social-media accounts. As Julia Angwin has observed, “if I don’t do anything to help my children learn to protect themselves, all their data will be swept up into giant databases, and their identity will be forever shaped by that information.”

How do you acculturate your children into the digital world?

If you’re a parent who is willing to share, I’d be eager to hear about your approach in your own words. How old is your child? What rules do you lay down? What guidance do you offer, if any? What do you leave up to your child? What do you think of the way they conceive of personal information, digital privacy, and the trail of data they are creating? How would you rate your level of awareness of what they do in digital spaces? What are your biggest worries, challenges, and dilemmas? Email conor@theatlantic.com if you’re willing to share answers to these questions, or any related thoughts.

I expect many parents will benefit from hearing one another’s experiences.

For Google, Everything Is a Popularity Contest
June 27th, 2017, 02:45 AM

When I saw that Google had introduced a “Classic Papers” section of Google Scholar, its search tool for academic journals, I couldn’t help but stroke my chin professorially. What would make a paper a classic, especially for the search giant? In a blog post introducing the feature, Google software engineer Sean Henderson explains the company’s rationale. While some articles gain temporary attention for a new and surprising finding or discovery, others “have stood the test of time,” as Henderson puts it.

How to measure that longevity? Classic Papers selects papers published in 2006, in a wide range of disciplines, which had earned the most citations as of this year. To become a classic, according to Google, is just to have been the most popular over the decade during which Google itself rose to prominence.

It might seem like an unimportant, pedantic gripe to people outside of academia. But Scholar’s Classic Papers feature offers a window into how Google conceives of knowledge—and the effect that theory has on the ideas people find with its services.

* * *

Google’s original mission is to “organize the world’s information and make it universally accessible.” It sounds simple enough, if challenging given the quantity of information in the world and the number of people who might access it. But that mission obscures certain questions. What counts as information? By what means is it accessible, and on whose terms?

The universals quickly decay into contingencies. Computers are required, for one. Information that lives offline, in libraries or in people’s heads, must be digitized or recorded to become “universally” accessible. Then users must pay for the broadband or mobile data services necessary to access it.

At a lower level, ordinary searches reveal Google’s selectiveness. A query for “Zelda,” for example, yields six pages of links related to The Legend of Zelda series of Nintendo video games. On the seventh page, a reference to Zelda Fitzgerald appears. By the eighth, a pizzeria called Zelda in Chicago gets acknowledgement, along with Zelda’s café in Newport, Rhode Island. Adding a term to the query, like “novelist” or “pizza,” produces different results—as does searching from a physical location in Chicago or Newport. But the company’s default results for simple searches offer a reminder that organization and accessibility mean something very particular for Google.

That hidden truth starts with PageRank, Google’s first and most important product. Named after Google founder Larry Page, it is the method by which Google vanquished almost all its predecessors in web search. It did so by measuring the reputation of websites, and using that reputation to raise or lower their positions in search results.

When I started using the web in 1994, there were 2,738 unique hostnames (e.g., TheAtlantic.com) online, according to Internet Live Stats. That’s few enough that it still made sense to catalog the web in a directory, like a phone book. Which is exactly what the big web business founded that year did. It was called Yahoo!

But by the time Page and Sergey Brin started Google in 1998, the web was already very large, comprising over 2.4 million unique hosts. A directory that large made no sense. Text searches had already been commercialized by Excite in 1993, and both Infoseek and AltaVista appeared in 1995, along with HotBot in 1996. These and other early search engines used a combination of paid placement and text-matching of query terms against the contents of web pages to produce results.

Those factors proved easy to game. If queries match the words and phrases on web pages, operators can just obscure misleading terms in order to rise in the rankings. Page and Brin proposed an addition. Along with analysis of the content of a page, their software would use its status to make it rise or fall in the results. The PageRank algorithm is complex, but the idea behind it is simple: It treats a link to a webpage as a recommendation for that page. The more recommendations a page has, the more important it becomes to Google. And the more important the pages that link to a page are, the more valuable its recommendations become. Eventually, that calculated importance ranks a page higher or lower in search results.
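That recursion, in which pages confer importance on the pages they recommend, can be made concrete in a few lines. What follows is a minimal sketch of the idea, not Google’s production algorithm; the four-page link graph and the damping value are illustrative assumptions.

```python
# A minimal sketch of PageRank's core idea on a hypothetical four-page
# web: every link passes a share of the linking page's importance to
# its target, and the scores are recomputed until they settle.

links = {
    "a": ["b", "c"],  # page "a" links to, i.e. recommends, "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85  # assumed: chance a surfer follows a link vs. jumping anywhere
rank = {page: 1 / len(links) for page in links}  # start all pages equal

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        for target in outlinks:
            # a recommendation from an important page is worth more
            new_rank[target] += damping * rank[page] / len(outlinks)
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Run on this toy graph, the much-linked page “c” floats to the top, and the page it endorses, “a,” rides along behind it: popularity begets rank, and rank makes further recommendations count for more.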

Although numerical at heart, Google made search affective instead. The results just felt right—especially compared to other early search tools. That ability to respond as if it knew what its users were thinking laid the foundation for Google’s success. As the media scholar Siva Vaidhyanathan puts it in his book The Googlization of Everything, relevance became akin to value. But that value was always “relative and contingent,” in Vaidhyanathan’s words. That is, the actual relevance of a web page—whether or not it might best solve the problem or provide the information the user initially sought—became subordinated to the sense of initial delight and subsequent trust in Google’s ability to deliver the “right” results. And those results are derived mostly from a series of recurrent popularity contests PageRank runs behind the scenes.

* * *

Google Scholar’s idea of what makes a paper a classic turns out to be a lot like Google’s idea of what makes a website relevant. Scholarly papers cite other papers. Like a link, a citation is a recommendation. With enough citations, a paper becomes “classic” by having been cited many times. What else would “classic” mean, to Google?

As it turns out, scholars have long used citation count as a measure of the impact of papers and the scholars who write them. But some saw problems with this metric as a measure of scholarly success. For one, a single, killer paper can skew a scholar’s citation count. For another, it’s relatively easy to game citation counts, either through self-citation or via a cabal of related scholars who systematically cite one another.

In 2005, shortly after Google went public, a University of California physicist named Jorge Hirsch tried to solve some of these problems with a new method. Instead of counting total citations, Hirsch’s index (or h-index, as it’s known) measures a scholar’s impact by finding the largest number of papers (call that number h) that have been cited at least h times. A scholar with an h-index of 12, for example, has 12 papers each of which is cited at least 12 times by other papers. H-index downgrades the impact of a few massively successful papers on a scholar’s professional standing, rewarding consistency and longevity in scholarly output instead. Hirsch’s method also somewhat dampens the effect of self- and group-citation by downplaying raw citation counts.
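The definition is mechanical enough to compute directly. Here is a minimal sketch of an h-index calculation; the citation counts are hypothetical.

```python
# Hirsch's h-index, per the definition above: the largest h such that
# the scholar has h papers cited at least h times each.
# The citation list below is invented for illustration.

def h_index(citations):
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:  # the paper at rank h has >= h citations
            h = position
        else:
            break
    return h

print(h_index([50, 18, 12, 12, 9, 6, 3, 1]))  # -> 6
```

Note that the single heavily cited paper at the top of the list barely moves the result, which is exactly the dampening of one-off hits described above.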

H-index has become immensely influential in scholarly life, especially in science and engineering. It is not uncommon to hear scholars ask after a researcher’s h-index as a measure of success, or to express pride or anxiety over their own h-indexes. H-index is regularly used to evaluate (and especially to cull) candidates for academic jobs, too. It also has its downsides. It’s hard to compare h-indexes across fields, the measure obscures an individual’s contribution in co-authored papers, and it abstracts scholarly success from its intellectual merit—the actual content of the articles in question.

That makes h-index eminently compatible with life in the Google era. For one, Google Scholar has been a boon to its influence, because it automates the process of counting citations. But for another, Google has helped normalize reference-counting as a general means of measuring relevance and value for information of all kinds, making the process seem less arbitrary and clinical when used by scholars. The geeks brought obsessive numerism to the masses.

Instead of measuring researchers’ success, Google Scholar’s Classic Papers directory defines canon by distance in time. 2006 is about ten years ago—long enough to be hard to remember in full for those who lived through it, but recent enough that Google had found its legs tracking scholarly research (the Scholar service launched in 2004). Classic papers, in other words, are classic to Google more than they are classic to humanity writ large.

In the academy today, scholars maintain professional standing by virtue of the quantity and regularity of their productivity—thus Hirsch’s sneer at brilliant one-offs. Often, that means scholarly work gets produced not because of social, industrial, or even cosmic need, but because the wheels of academic productivity must appear to turn. Pressing toward novel methods or discoveries is still valued, but it’s hard and risky work. Instead, scholars who respond to specific, present conditions in the context of their fields tend to perform best when measured on the calendar of performance reviews.

Looking at papers cited the most in 2006, as Google Scholar’s Classic Papers does, mostly reveals how scholars have succeeded at this gambit, whether intentionally or not. For example, the most-cited paper in film is “Narrative complexity in contemporary American television,” by the Middlebury College television studies scholar Jason Mittell. Mittell was one of the first critics to explain the rise of television as high culture, particularly via social-realist serials with complex narratives, like The Sopranos. Mittell’s take was both well-reasoned and well-timed, as shows like Deadwood, Big Love, and The Wire were enjoying their runs when he wrote the paper. That trend has continued uninterrupted for the decade since, making Mittell’s article a popular citation.

Likewise, the most cited 2006 paper in history is “Can history be open source? Wikipedia and the future of the past,” by Roy Rosenzweig. The article offers a history and explanation of Wikipedia, along with an assessment of the website’s quality and accuracy as an historical record (good and bad, it turns out). As with complex TV, the popularity of Rosenzweig’s paper relates largely to the accidents of its origin. Wikipedia was started in 2001, and by 2005 it had begun to exert significant impact on teaching and research. History has a unique relationship to encyclopedic knowledge, giving the field an obvious role in benchmarking the site. Rosenzweig’s paper even discusses the role of Google’s indexing methods in helping to boost Wikipedia’s appearance in search results, and the resulting temptation among students to use Wikipedia as a first source. Just as in Mittell’s case, these circumstances have only amplified in the ten years since the paper’s publication, steadying its influence.

This pattern continues in technical fields. In computer vision, for example, a method of identifying the subject of images is the top cited paper. Image recognition and classification was becoming increasingly important in 2006, and the technique the paper describes, called spatial pyramid matching, remains important as a method for image matching. Once more, Google itself remains an obvious beneficiary of computer vision methods.

To claim that these papers “stand the test of time,” as Henderson does, is suspect. Instead, they show that the most popular scholarship is the kind that happened to find purchase on a current or emerging trend, just at the time that it was becoming a concern for a large group of people in a field, and for whom that interest amplified rather than dissipated. A decade hence, the papers haven’t stood the test of time so much as proved, in retrospect, to have taken the right bet at the right moment—where that moment also corresponds directly with the era of Google’s ascendance and dominance.

* * *

PageRank and Classic Papers reveal Google’s theory of knowledge: What is worth knowing is what best relates to what is already known to be worth knowing. Given a system that construes value by something’s visibility, be it academic paper or web page, the valuable resources are always the ones closest to those that already proved their value.

Google enjoys the benefits of this reasoning as much as anyone. When Google tells people that it has found the most lasting scholarly articles on a subject, for example, the public is likely to believe that story, because they also believe Google tends to find the right answers.

But on further reflection, a lot of Google searches do not produce satisfactory answers, products, businesses, or ideas. Instead, they tend to point to other venues with high reputations, like Wikipedia and Amazon, with which the public has also developed an unexamined relationship of trust. When the information, products, and resources Google lists don’t solve the problem the seeker set out to solve, the user has two options: either continue searching with more and more precise terms in the hopes of being led to more relevant answers, or shrug and click the links provided, resolved to take what was given. Most choose the latter.

This way of consuming information and ideas has spread everywhere else, too. The goods worth buying are the ones that ship via Amazon Prime. The Facebook posts worth seeing are the ones that show up in the newsfeed. The news worth reading is the stuff that shows up to be tapped on. And as services like Facebook, Twitter, and Instagram incorporate algorithmic methods of sorting information, as Google did for search, all those likes and clicks and searches and hashtags and the rest become votes—recommendations that combine with one another to produce output that’s right by virtue of having been sufficiently right before.

It’s as if Google, the company that promised to organize and make accessible the world’s information, has done the opposite. Almost anything can be posted, published, or sold online today, but most of it cannot be seen. Instead, information remains hidden, penalized for having failed to be sufficiently connected to other, more popular information. But to think differently is so uncommon, the idea of doing so might not even arise—for shoppers and citizens as much as for scholars. All information is universally accessible, but some information is more universally accessible than others.

Is the Problem With Tech Companies That They're Companies?
June 27th, 2017, 02:45 AM

What news do people see? What do they believe to be true about the world around them? What do they do with that information as citizens—as voters?

Facebook, Google, and other giant technology companies have significant control over the answers to those questions. It’s no exaggeration to say that their decisions shape how billions see the world and, in the long run, will contribute to, or detract from, the health of governing institutions around the world.

That’s a hefty responsibility, but one that many tech companies say they want to uphold. For example, in an open letter in February, Facebook’s founder and CEO Mark Zuckerberg wrote that the company’s next focus would be “developing the social infrastructure for community—for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.”

The trouble is not a lack of good intentions on Zuckerberg’s part, but the system he is working within, the Stanford professor Rob Reich argued on Monday at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic.

Reich said that Zuckerberg’s effort to position Facebook as committed to a civic purpose is “in deep and obvious tension with the for-profit business model of a technology company.” The company’s shareholders are bound to be focused on increasing revenue, which in Facebook’s case comes from user engagement. And, as Reich put it, “it’s not the case that responsible civic engagement will always coincide with maximizing engagement on the platform.”

For example, Facebook’s news feed may elicit more user engagement when the content provokes some sort of emotional response, as is the case with cute babies and conspiracy theories. Cute babies are well and good for democracy, but those conspiracy theories aren’t. Tamping down on them may lead to less user engagement, and Facebook will find that its commitment to civic engagement is at odds with its need to increase profits.

The idea that a company’s sole obligation is to its shareholders comes from a 1970 article in The New York Times Magazine by the economist Milton Friedman called “The Social Responsibility of Business Is to Increase Its Profits.” In it, Friedman argued that if a corporate executive tried to pursue any sort of “social responsibility” (and Friedman always put that in quotes), the executive was in a sense betraying the shareholders who had hired him. Instead, he must solely pursue profits, and leave social commitments out of it. Reich says that these ideas have contributed to a libertarian “background ethos” in Silicon Valley, where people believe that “you can have your social responsibility as a philanthropist, and in the meantime make sure you are responding to your shareholders by maximizing profit.”

Reich believes that some sort of oversight is necessary to ensure that big tech companies make decisions that are in the public’s interest, even when it’s at odds with increasing revenue. Relying on CEOs and boards of directors to choose to do good doesn’t cut it, he said: “I think we need to think structurally about how to create a system of checks and balances or an incentive arrangement so that whether you get a good person or a bad person or a good board or a bad board, it’s just much more difficult for any particular company or any particular sector to do a whole bunch of things that threaten nothing less than the integrity of our democratic institutions.”

Reich said that one model for corporations might be creating something like the ethics committees that hospitals have. When hospitals run into complicated medical questions, they can refer the question to the ethics committee, whose members—doctors, patients, community members, executives, and so on—represent a variety of interests. That group dives deeply into the question and comes up with a course of action that takes into account the various values they prize. It’s a complicated, thoughtful process—“not an algorithm where you spit out the correct moral answer at the end of the day,” Reich said.

EU Hits Google With Record $2.7 Billion Antitrust Fine
June 27th, 2017, 02:45 AM

The European Commission has fined Google a record $2.7 billion for the way it promotes its own shopping service over those of its rivals, and ordered the tech giant to change the way it shows the results or face further fines.

“What Google has done is illegal under EU antitrust rules,” Margrethe Vestager, the European Union’s Competition Commissioner, said in a statement. “It has denied other companies the chance to compete on their merits and to innovate, and most importantly it has denied European consumers the benefits of competition, genuine choice and innovation.”

Google in a statement said it “respectfully disagree[s]” with the ruling and will review it “as we consider an appeal.”

The EC said it was up to Google to decide how it would change its search results related to shopping. But if the company fails to comply, it will be ordered to pay 5 percent of Alphabet’s daily worldwide earnings—an amount equivalent to about $14 million each day. Alphabet is Google’s parent company.  

The ruling is the latest run-in U.S. tech companies have had with the EU’s regulators, who regularly target them for antitrust and tax-related issues. In August 2016, Vestager demanded that Apple repay $14.5 billion in back taxes, calling the incentives the company received in Ireland “illegal tax benefits.” Apple CEO Tim Cook called that ruling “maddening.” Vestager is also investigating Amazon’s tax practices in Europe and has fined Facebook over its acquisition of WhatsApp. But it’s Google that has felt the brunt of the rulings: Last year the EC announced it was investigating Google’s mobile operating system, Android, on antitrust charges. The company is also being scrutinized for its advertising practices, which the bloc says violate its rules.

The EC’s moves have prompted criticism that European regulators are deliberately targeting U.S. tech companies. The bloc’s regulators reject the accusation. The companies, too, have denied any wrongdoing.

Who Gets to Use Facebook's Rainbow 'Pride' Reaction?
June 26th, 2017, 02:45 AM

James Berri traveled three hours to Sacramento earlier this month for his first Pride parade, one of hundreds of annual LGBTQ celebrations across America. Berri also talked about the experience on Facebook, reading and reacting to other people’s posts with thumbs-up likes and Facebook’s new rainbow “Pride” emoji. Throughout June, the platform is offering a rainbow flag alongside likes, hearts, and angry faces that people can click on to react to others’ posts and comments. Yet Berri, a 21-year-old transgender artist, is conflicted over the fact that not everyone can use this new rainbow button.

Back in Fresno, Berri wondered how Facebook decides who’s eligible. “Why don’t they have it, too?” he asked, referring to friends sitting with him in a salon in the larger, less-prominent California city. “It makes me confused for my friends.”

One friend disagreed: “Maybe I don’t want my family to actively know that I’m in all of these things because they’re just gonna—they’re not gonna like it.”

As a rare commodity, the Pride reaction has attracted a rainbow hunt among Facebook users. This June, Facebook announced that the feature would be available in “major markets with Pride celebrations” and for people who follow the company’s LGBTQ page. They also announced that the rainbow would “not be available everywhere.” For example, Facebook limits access in countries where LGBTQ rights are politically risky. Yet many Americans, like Berri’s Fresno friends, also missed out.

Is Facebook’s rollout of rainbow flags a case of algorithmic hypocrisy, user protection, or something else? Using their ability to detect people’s location and interests, the company's algorithms are choosing which people get the rainbow flag while hiding it from others. At first glance, this approach looks like it could contribute to the creation of political bubbles, as a feature promoted in progressive cities and less available in the rest of America. If real, these discriminatory political bubbles could constitute a secret kind of “digital gerrymandering,” according to Harvard Law professor Jonathan Zittrain.

Algorithmic political bubbles are hard to detect because they show something different to each person. Only by comparing notes can people map the boundaries of what a platform chooses to show its users. Doing so, when legal, allows independent researchers to detect discrimination and hold platforms accountable for their actions. To find out if Facebook's rainbow Pride reaction was a case of digital gerrymandering, our three-person team—a data scientist, a survey researcher, and an ethnographer of youth social-media practices—conducted an algorithmic audit, asking hundreds of Facebook users in 30 cities to report if they could access the Pride reaction.

Our audit asked two questions. First, are there U.S. cities where everyone is allowed to give a rainbow reaction? Second, do Facebook’s own LGBTQ-interest algorithms predict who has access elsewhere?

By using Facebook’s algorithms, we based our audit on the way that Facebook’s software sees the world. When advertisers publish an ad with Facebook, the company asks them to define the regions, interests, and demographics of the people they want to reach. While the platform’s gender targeting does not allow grouping by LGBTQ identities, their algorithms do infer LGBTQ-interest based on what people like, share, and write about. People can be categorized for their interests in, for instance, “Gay Pride,” “LGBT Culture,” “Pride Parade,” “Rainbow Flag (LGBT),” and “LGBT Social Movements.” Since Facebook allows advertisers to include or exclude people from those categories, we could survey people to discover if LGBTQ-interested people have a different experience on the platform from people that Facebook categorizes as not LGBTQ-interested.

We asked people from 30 cities in 15 states to report their access to the Pride reaction.

Across 15 states that are home to the largest U.S. cities, we chose a large city per state and paired it with a smaller city elsewhere in the state. Within each city, we used Facebook’s ad targeting to recruit people who the platform’s algorithms think are interested in LGBTQ issues, compared to people who aren’t. We then tested the correlation between LGBTQ interest and access to the Pride reaction.

Among the cities we investigated, an overwhelming percentage of people without LGBTQ interests reported having the pride emoji in New York City, San Francisco, Chicago, Seattle, and Boston. Yet many other cities among the largest 25 in the U.S. were excluded from city-wide access, including Philadelphia, Detroit, Phoenix, and Nashville.

In places without city-wide access, Facebook’s LGBTQ advertising groups correlated strongly with people's ability to use the rainbow reaction. On average, people with LGBTQ interests who responded to our ads were 46 percent more likely than the non-LGBTQ interest group to report having access to the rainbow reaction. It's possible that people in the LGBTQ interest groups received the rainbow because they chose to “like” the LGBTQ@Facebook group, which the company says will unlock the rainbow reaction.
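The arithmetic behind a figure like that is easy to sketch. One plausible reading of “46 percent more likely” is a ratio of access rates between the two recruited groups; the survey counts below are hypothetical, chosen only to reproduce that gap.

```python
# A sketch of the comparison described above, with invented counts.
# Each group is recruited through Facebook's ad targeting: one audience
# the platform tags as LGBTQ-interested, one it does not.

lgbtq_interest = {"has_reaction": 73, "no_reaction": 27}  # hypothetical
no_interest = {"has_reaction": 50, "no_reaction": 50}     # hypothetical

def access_rate(group):
    # fraction of respondents who reported having the Pride reaction
    return group["has_reaction"] / (group["has_reaction"] + group["no_reaction"])

ratio = access_rate(lgbtq_interest) / access_rate(no_interest)
print(f"LGBTQ-interest group: {access_rate(lgbtq_interest):.0%}")
print(f"non-interest group:   {access_rate(no_interest):.0%}")
print(f"relative likelihood:  {ratio:.2f}x")  # 1.46x, i.e. 46% more likely
```

A single ratio like this is only a point estimate, of course; recruiting hundreds of respondents across 30 cities, as the audit did, is what makes such a comparison meaningful.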

Kristina Boerger, a 52-year-old musician and human-rights organizer from Greencastle, Indiana, was surprised that other people could use the reaction but not her. “It certainly wouldn’t be because Facebook doesn’t know that I am queer,” she said. “That would be one of the first things they know about me.”

Why would Facebook selectively release the pride reaction? When we reached out for comment, a company spokesperson replied with a quote from an early June press release, explaining that Facebook limited access to test the feature, even though 22.7 million people have presumably unlocked the rainbow by liking the LGBTQ page. The platform may also be trying to protect users in parts of the U.S. where they could face harassment. When Facebook encouraged people in 2015 to choose a rainbow profile picture, some administrators of Facebook groups banned any member who made the change.

Limited access to the Pride reaction might also allow Facebook to gain PR benefits from supporting gay rights in some U.S. cities while avoiding scandal elsewhere. Could regional geo-fencing help the company manage public expectations in a polarized political environment? Betsy Willmore, an organizer of PrideFest in Springfield, Illinois, thinks the company is carefully managing its image. “Their intention is not to piss people off,” she said. “And they are legitimizing those that are getting pissed off by it.”

Many Americans could be unaware of Facebook’s public support for LGBTQ rights. After facing election-year pressures, the company might benefit from selective public understanding of its positions. Facebook and its PAC, like many corporations, routinely fund both Democratic and Republican candidates and events. Yet we failed to find a correlation between 2016 presidential-election patterns and access to the Pride rainbow. Large cities that supported Trump in 2016 didn’t receive the Pride reaction, but neither did many Clinton-supporting cities. If there’s a political pattern to Facebook’s decision, we couldn’t detect it.

Overall, our audit found that Facebook is doing what it says. The platform has avoided offering city-wide pride reactions in large metropolitan areas that supported Trump in the last election, but LGBTQ-interested people are still able to access the feature on average.

This month, millions of Americans have celebrated Pride with large urban events, in small towns, and across their digitally connected communities. For Berri and his friends in a Fresno salon, the choice to fly a flag online was as consequential as any march. During the conversation, one friend, a queer 19-year-old from Clovis whose name has been omitted to protect them from harassment, decided to “like” Facebook’s LGBTQ page for access to the rainbow reaction. Speaking of disapproving family, they said, “If they’re gonna be pissed off about it, whatever.”

Why Would Anyone Fear a Self-Driving Car?
June 26th, 2017, 02:45 AM

To understand what the world will be like in ten years, it isn’t enough to look back at how different things were a decade ago and presume the differences will be comparable. The pace of technological change is speeding up so quickly, says Astro Teller, who leads the arm of Google that aims at “moonshots,” that one must look back 30 years to experience the same amount of discontinuity we’ll feel ten years hence.

A decade out, he continued, half of all cars on the road will be self-driving (and there would be more but for the fact that today’s cars are too expensive an asset to junk immediately).

The remarks took place Sunday at the Aspen Ideas Festival, which the Aspen Institute co-hosts with The Atlantic. And it prompted a question from moderator Andrew Ross Sorkin.

Trying to imagine a rapid shift toward self-driving cars, Sorkin wondered if the public would be ready as quickly as the technology. “Today there are 35,000 fatalities on the road using cars that we all drive just in the United States,” he said. “What number does that have to go down to that it becomes politically palatable, to the public, that they get in the car, and there may very well be a fatality as the result of a computer?”

In Teller’s view, we’re nearly there already.

“Almost every single person in this room already made that choice, because you got on a plane,” he told the Aspen crowd. “Planes fly roughly 99 percent of the miles that they fly by computer. It's now to the place that it is not safe for humans to fly in a lot of conditions. It's mandated that the computer fly because the computer can do it better.”

He posed this question to skeptics:

If you could have a robotic surgeon that makes one mistake in 10,000, or a human that made one mistake in 1,000, are you really going to go under the knife with the human? Really? We are already at that stage. I think self-driving cars are not in some weird other bucket. We make this decision all the time.

I suspect he is right, if only because more than half of young people already say in surveys that they look forward to self-driving cars, and the ubiquity of ride-sharing services with human drivers is already conditioning car passengers to give over more control. As a counterpoint, however, there are lots of Americans who choose to drive rather than fly, fearing the latter more despite knowing that it is statistically much safer.

With that in mind, I pose the question to readers who shudder at the thought of getting in a self-driving car, even once such cars are well tested and statistically safer than cars piloted by humans. Are you able to articulate what it is about the self-driving car that scares you? I fear sharks, despite the long odds against one biting me, because they are prehistoric sea monsters who rise up to bite people unexpectedly with razor-sharp teeth. Dying by a combination of being eaten alive and drowning seems unusually scary. Why is getting in a self-driving car scarier than getting in a taxi?

The entire opening session of the Aspen Ideas Festival is below, with the Astro Teller interview starting at the 36-minute mark:

RIP Gchat
June 26th, 2017, 02:45 AM

Let’s first acknowledge that Gchat was never officially called Gchat. The service, launched in February 2006, was named Google Talk, and Google refused to refer to it by its colloquial name. For anyone mourning its demise, which the company announced in a March blog post, the official names sound awkward, like they’re describing something else. To me, and to many other users, it’s Gchat, and always will be.

The brilliance of Gchat was that it allowed you to instant message any Gmail user within a web browser, instead of using a separate application. This attribute was a lifeline for those of us who, a decade ago, were online all day at our entry-level jobs in open offices, every move tracked on computers that required admin access to download new software, with supervisors who could appear behind you at any time. You could open a separate browser window or a single tab, keeping Gchat running in the background as you ostensibly worked on projects aside from the dramas of your personal life.

Before Gchat, IMing was cloaked in anonymity. On AIM, I dialed up as “thalia587”—inspired by the Greek muse of comedy—after finishing my homework every night in high school. I shed that identity in college, when I’d log onto iChat on my blue iMac as “beulahtengo,” a mash-up of Beulah and Yo La Tengo, two of my favorite bands at the time. My friends knew it was me, but if I’d been a more rebellious youngster, I could have used those handles to IM anyone anonymously.

On Gchat, I was myself. When my invitation from Gmail—which at that point was still invitation-only—arrived right before my college graduation, I jumped on a username that was a variation of my real name, something I could print on a resume.

My college friends all did the same. When we scattered across continents after graduation, just a few months after abandoning Friendster for a new site called Facebook that, as far as we could tell, was most useful for determining who on campus was In A Relationship, Gmail and later, Gchat, helped us stay in touch, filling in the gaps between LiveJournal entries.

* * *

Gchat became another sort of lifeline during my time as a stay-at-home parent. I no longer had an employer standing over my shoulder or restricting what I downloaded. But some of my friends still used Gchat. So once my son whittled his naps down to one a day, guaranteeing a solid chunk of time for me to turn off Raffi and seek adult conversation, I’d crack open my MacBook and launch Gmail, around the time my friends were eating leftovers at their desks, their idle yellow status icons turning green again.

In the middle of my days of unpaid labor, Gchat was my remaining connection to the world of paid work. While I scanned the latest tweets in my feed, I kept a tab open to run Gchat in the background, ready in case someone wanted to talk during the one time of the day I was free.

* * *

Other people prized Gchat’s option to chat “off the record,” which saved no transcript of the messages exchanged. As a digital packrat who saves folders of downloads and screenshots in case I need them someday, I never opted to do so. I wasn’t trading secrets or conducting an illicit affair. On the contrary—I loved being able, for the first time, to preserve transcripts of chats with my quick-witted friends that, short of hiring a stenographer to follow me around, I’d never be fast enough to record in real life.

Only after a decade of trying to capture the ephemeral did I realize my mistake. Now, whenever I use Gmail’s search feature, essential for a service that urges you to keep everything while making it tedious to organize anything, driftwood from some years-old chat floats to the surface. Searching for, say, “Sleater-Kinney” in an effort to retrieve purchased concert tickets bubbles up ancient conversations with a variety of people with whom I’ve discussed the band over the years, only some of whom I’m still friends with.

Reading email exchanges from past relationships that soured is awkward enough. But it’s the old Gchats, conducted in close to real time, that transport me to the past, revealing thoughts I don’t remember having in conversations with people I no longer speak to, people who at the time I could never imagine not knowing. There they are, in stark black sans-serif: my overabundant exclamation points, my unsuccessful attempts at sarcasm, my bad jokes, or worse, responding “lol” to misogynistic ones. All preserved in digital amber, like the insect from Jurassic Park. And just like in the movie, when the past is within such close reach, I can’t leave it alone.

I understand why Google abandoned what it calls Talk. Like Google Reader, the now-defunct RSS feed aggregator that was the first Google product I mourned, Gchat’s limited feature set is a relic from a simpler time. When Gchat launched, you were either online or offline, with your status indicating your availability. The cultural tide has shifted in the opposite direction—now we’re always on, all messages are instant, and people have embraced the impermanence of digital scraps that briefly remain “on the record” before disappearing forever—think Snapchat and Instagram Stories.

Unlike with Reader, which Google killed outright, the company has in mind a replacement for Gchat—Google Hangouts, which was stealthily integrated into Gmail in 2013. The company says Hangouts offers “advanced improvements” to Gchat’s “simple chat experience,” and that the vast majority of users who’ve switched over report few differences in functionality. Any tweaks are minor, like the discontinuation of idle and busy status icons in favor of “Last Seen” indicators and a mute feature.

I don’t know that I need Gmail to offer group video calling, photo messages or location sharing. I miss the time when green, yellow, and red bubbles of availability sufficed. We’re already flush with ways to convey the intricate mundanity of our lives, though each new one requires someone younger and younger to explain it to me. Inevitably, something new will change the game again. As an individual user, caught up in the whims of corporations competing for eyeballs and profit, it’s best not to get too attached to any one particular method of communicating.

Hangouts ushers Gchat into the mobile era, allowing asynchronous communication between two or more Gmail users, none of whom need to be sitting behind a computer to send a message. I downloaded the Hangouts app on my phone, but as I examine it in the lineup of other options, its relevance to my own life seems questionable. Rather than using it to contact the people I’m used to Gchatting, I imagine I’ll reach them with another app we’re both already on.

* * *

When I first signed up for Gmail, the mobile world as it exists today was unimaginable. I’d just upgraded from a 30-minute-a-month phone plan, reserved for emergencies, to my first two-year contract. T-Mobile shipped a small box to my first apartment with a shiny black flip phone wrapped in clear plastic. Texting was difficult and expensive. Each one cost about a dime, and if you wanted to type a C, you had to hit the “2” button three times. My mom had a similar phone; until she got the hang of it, she’d type my aunt Marcia’s name as “Mapaga.” The nickname stuck, even as the technology improved.

As new phones and data plans made texting easier and cheaper, and smartphones popularized multimedia messages, like videos, GIFs and emojis, our phones became our go-to sources for instant connection. Now I can send a minute-long video of my son’s first haircut to a group message of out-of-state family members, or show my friends a screenshot of an acquaintance I just saw on TV, and receive an instant response, before the show’s credits roll.

This impulse to share is what Google is trying to leverage through Hangouts, but with a corporate-friendly spin. “We’ve been working hard to streamline the classic Hangouts product for enterprise users,” reads the blog post announcing Gchat's demise. Another post on a different Google blog goes further, highlighting the company’s efforts to “[double] down on our enterprise focus for Hangouts and our commitment to building communication tools focused on the way teams work.” Clearly, people using Gmail for work, not just during work, are increasingly critical as Google competes with Microsoft and Slack for corporate users.

* * *

After Google announced the future of its messaging tools, I could only think about the past. When Google Reader vanished, the accompanying data disappeared forever, so I worried that the formal end of Gchat might mean the loss of those conversations. I searched Gmail’s help section for steps to download an archive of my chats, which number in the thousands, but there’s no easy way to do so. My pulse quickened at the thought of losing all those transcripts I hadn’t read in years, but that I might someday want to read again.

Like the one in which I coached my younger sister, who now has a master’s degree and just bought a house with her fiancée, on her college application essay. “I suck at the ‘how did you first learn of Smith College’ question,” she’d lamented. “I was lurking colleges in Princeton Review … and I saw that Smith had ‘dorms like palaces’?”

Or the wistful ones from a friend in the throes of new motherhood, including one in which she contemplated a long car drive with her infant. “What’s the worst that can happen? She cries for three hours? That just sounds like…yesterday.”

Even ones that make me cringe, like one in which a guy who knew I pined for him told me “Serious Talk is a Poor Idea right now” because he was drunk on cheap wine and watching Predator 2 on a Saturday afternoon. “I mean,” he’d typed, “this movie has Bill Paxton in it.”

As with most 21st-century dilemmas requiring an immediate solution, I consulted—what else?—Google search. I discovered a step-by-step method, seemingly legit, for exporting all archived chats. I followed the instructions, and a file with the extension .mbox started downloading to my desktop—something the Mail application could read.

Once complete, I scrolled through the new Mail folder, relieved to see my fleeting correspondence from the previous decade. But as I looked closer, it became clear that the file had only imported the last line of each one of the thousands of chat threads in my Gmail history. Most of them were simple salutations or responses to something unknown—ttyl, haha, brb, lol, you too—stripped of all context through this technological hiccup. But some friends had a habit of never formally ending Gchat conversations, so scrolling through some lines revealed more about what we’d been discussing when one of us had signed off.

            al qaeda clearly has the wrong target

            did you bring the hobo gloves?

            not really wasted

            plus i have to find some meat to eat

            she wants help with her Ikea bookshelf

            but Im Mom Terrible, which is much better than regular terrible

            life is continually amusing

Fortunately, my paranoia was unwarranted. Google’s communications team assured me the company will archive all on-the-record chats, even those predating Hangouts. I’m relieved I can still peek at that time in my life to see how much has changed in a decade, but it’s unsettling to realize that ultimately, it’s not up to me. To keep enjoying the perks of any communication platform, some control over the content must be ceded. Not a comfortable thought, this powerlessness, but technology unspools in one direction only, offering no way to rewind.  

The Mars Robot Making Decisions on Its Own
June 23rd, 2017, 02:45 AM

In 2012, the Curiosity rover began its slow trek across the surface of Mars, listening for commands from Earth about where to go, what to photograph, which rocks to inspect. Then last year, something interesting happened: Curiosity started making decisions on its own.

In May last year, engineers back at NASA installed artificial-intelligence software on the rover’s main flight computer that allowed it to recognize inspection-worthy features on the Martian surface and correct the aim of its rock-zapping laser. The humans behind the Curiosity mission are still calling the shots in most of the rover’s activities. But the software allows the rover to actively contribute to scientific observations without much human input, making the leap from automation to autonomy.

In other words, the software—just about 20,000 lines of code out of the 3.8 million that make Curiosity tick—has turned a car-sized, six-wheeled, nuclear-powered robot into a field scientist.

And it’s good, too. The software, known as Autonomous Exploration for Gathering Increased Science, or AEGIS, selected inspection-worthy rocks and soil targets with 93 percent accuracy between last May and this April, according to a study from its developers published this week in the journal Science Robotics.

AEGIS works with an instrument on Curiosity called the ChemCam, short for chemistry and camera. The ChemCam, a one-eyed, brick-shaped device that sits atop the rover’s spindly robotic neck, emits laser beams at rocks and soil as far as 23 feet away. It then uses the light coming off the impacts to determine the geochemical composition of the vaporized material. Before AEGIS, when Curiosity arrived at a new spot, ready to explore, it fired the laser at whatever rock or soil fell into the field of view of its navigation cameras. This method certainly collected new data, but it wasn’t the most discerning way of doing it.

With AEGIS, Curiosity can search for and pick targets in a much more sophisticated fashion. AEGIS is guided by a computer program that developers, using images of the Martian surface, taught to recognize the kinds of rock and soil features that mission scientists want to study. AEGIS examines the images and finds targets that fit the specified parameters, ranking them by how closely they match what the scientists asked for. (It’s not perfect; AEGIS can sometimes include a rock’s shadow as part of the object.)
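To make the ranking idea concrete, here is a toy sketch, in Python, of rank-by-match target selection. This is not NASA’s flight software; the feature names, preference values, and scoring rule are all invented for illustration.

    # Toy sketch of AEGIS-style target ranking (invented, not NASA flight code):
    # candidates extracted from an image are scored against scientist-specified
    # preferences, then sorted best-first.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        brightness: float  # hypothetical feature measurements, each scaled 0-1
        size: float
        elongation: float

    def score(c, prefs):
        # Lower is better: total distance from the preferred feature values.
        return sum(abs(getattr(c, feature) - want) for feature, want in prefs.items())

    # Suppose scientists ask for small, bright, roundish targets (values made up).
    prefs = {"brightness": 0.9, "size": 0.2, "elongation": 0.1}

    candidates = [
        Candidate("rock_a", brightness=0.80, size=0.30, elongation=0.20),
        Candidate("rock_b", brightness=0.40, size=0.70, elongation=0.60),
        Candidate("vein_c", brightness=0.95, size=0.15, elongation=0.05),
    ]

    for c in sorted(candidates, key=lambda c: score(c, prefs)):
        print(c.name, round(score(c, prefs), 2))  # vein_c ranks first

The real system weighs far more properties, but the principle is the same: every candidate gets a score for how closely it matches what the scientists asked for, and the best-scoring target is the one ChemCam zaps.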

Here’s how Curiosity’s cameras see the Martian landscape with AEGIS. The targets outlined in blue were rejected; those in red are potential candidates. The best targets are filled in with green, and the second-best with orange:

NASA / JPL-Caltech

When AEGIS settles on a preferred target, ChemCam zaps it.

AEGIS also helps ChemCam with its aim. Let’s say operators back on Earth want the instrument to target a specific geological feature they saw in a particular image. And let’s say that feature is a narrow mineral vein carved into bedrock. If the operators’ commands are off by a pixel or two, ChemCam could miss it. They may not get a second chance to try if Curiosity’s schedule calls for it to start driving again. AEGIS corrects ChemCam’s aim both in human-requested observations and in its own searches.

These autonomous activities have allowed Curiosity to do science when Earth isn’t in the loop, says Raymond Francis, the lead system engineer for AEGIS at NASA’s Jet Propulsion Laboratory in California. Before AEGIS, scientists and engineers would examine images from Curiosity, determine further observations, and then send instructions back to Mars. But while Curiosity is capable of transmitting large amounts of data back to Earth, it can only do so under certain conditions. The rover can only transmit data directly to Earth for a few hours a day, because doing so saps power. It can also transmit data to orbiters circling Mars, which then kick it over to Earth, but those spacecraft only have eyes on the rover for about two-thirds of the Martian day.

“If you drive the rover into a new place, often that happens in the middle of the day, and then you’ve got several hours of daylight after that when you could make scientific measurements. But no one on Earth has seen the images, no one on Earth knows where the rover is yet,” Francis says. “We can make measurements right after the drives and send them to Earth, so when the team comes in the next day, sometimes they already have geochemical measurements of the place the rover’s in.”

Francis said there was at first some hesitation on the science side of the mission when AEGIS was installed. “I think there’s some people who imagine that the reason we’re doing this is so that we can give scientists a view of Mars, and so we shouldn’t be letting computers make these decisions, that the wisdom of the human being is what matters here,” he said. But “AEGIS is running during periods when humans can’t do this job at all.”

AEGIS is like cruise control for rovers, Francis said. “Micromanaging the speed of a car to the closest kilometer an hour is something that a computer does really well, but choosing where to drive, that’s something you leave to the human,” he said.

There were some safety concerns in designing AEGIS. Each pulse from ChemCam’s laser delivers more than 1 million watts of power. What if the software somehow directed ChemCam to zap the rover itself? To protect against that disastrous scenario, AEGIS engineers made sure the software was capable of recognizing the exact position of the rover during its observations.  “When I give talks about this, I say we have a rule that says, don’t shoot the rover,” Francis says. AEGIS is also programmed to keep ChemCam’s eye from pointing at the sun, which could damage the instrument.
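Here is an equally toy illustration, under the same caveat (invented numbers and names, nothing from the actual flight software), of how a “don’t shoot the rover” rule can be enforced: before firing, every candidate pointing direction is checked against keep-out zones covering the rover’s own body and the sun.

    # Toy keep-out-zone check before firing (invented, not flight code).
    # Pointing directions are (azimuth, elevation) pairs in degrees.
    KEEP_OUT_ZONES = [
        # (min_az, max_az, min_el, max_el): hypothetical angles at which the
        # mast-mounted instrument would be looking at the rover's own deck.
        (150.0, 210.0, -90.0, -10.0),
    ]
    SUN_MIN_SEPARATION_DEG = 10.0  # hypothetical safety margin around the sun

    def safe_to_fire(az, el, sun_az, sun_el):
        # Reject pointings that hit the rover body...
        for lo_az, hi_az, lo_el, hi_el in KEEP_OUT_ZONES:
            if lo_az <= az <= hi_az and lo_el <= el <= hi_el:
                return False
        # ...or pass too close to the sun (a crude box check, fine for a toy).
        if (abs(az - sun_az) < SUN_MIN_SEPARATION_DEG
                and abs(el - sun_el) < SUN_MIN_SEPARATION_DEG):
            return False
        return True

    print(safe_to_fire(180.0, -45.0, sun_az=90.0, sun_el=40.0))  # False: rover deck
    print(safe_to_fire(45.0, 10.0, sun_az=90.0, sun_el=40.0))    # True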

In many ways, it’s not surprising that humanity has a fairly autonomous robot roaming another planet, zapping away at rocks like a nerdy Wall-E. Robots complete far more impressive tasks on Earth. But Curiosity is operating in an environment no human can control. “In a factory, you can program a robot to move in a very exact way over to a place where it picks up a part and then moves again in a very exact way and places it onto a new car that’s being built,” Francis says. “You can be assured that it will work every time. But when you’re in a space exploration context, literally every time AEGIS runs it’s in a place no one has ever seen before. You don’t always know what you’re going to find.”

Francis says NASA’s next Mars rover, scheduled to launch in 2020, will leave Earth with AEGIS already installed. Future robotic missions to the surfaces or even oceans of other worlds will need it, too. The farther humans send spacecraft, the longer it will take to communicate with them. The rovers and submarines of the future will spend hours out of Earth’s reach, waiting for instructions.

Why not give them something to do?

How Wheelchair Accessibility Ramped Up
June 23rd, 2017, 02:45 AM

Stephanie Woodward just wanted to meet her friends for a drink. It was a bar she’d never visited, and she was excited. But going anywhere new for Woodward requires a vetting process. She uses a wheelchair, so building access is always a worry. Research on Google Street View proved promising in this case: A ramp led up into the entryway. That evening, Woodward entered the front door without trouble. But once inside, a single step stood between her and the bar.

It was one step, but for Woodward it may as well have been a wall. “I’m in the front lobby, but to get any sort of service, to even be seen, I had to call the staff,” she says. “I can’t visit this business independently. I’m a strong wheelchair user, but hopping steps is not an easy task.”

Thanks to decades of disability activism culminating in the passage of the Americans with Disabilities Act (ADA) in 1990, the ramp has become both a tool for accessibility and an opportunity for architectural innovation. In the modern built environment, the ramp serves people who use wheelchairs or push strollers—making those bodies newly visible in the process. Yet, despite their apparent success, ramps remain contested sites for equal access.

* * *

The ramp is believed to have moved the materials that built the Egyptian pyramids and Stonehenge. The ancient Greeks constructed a paved ramp known as the Diolkos to drag ships across the Isthmus of Corinth. In 1600, Galileo hailed the inclined plane as one of the six simple machines in his work Le Meccaniche.

The ramp’s ability to move objects shouldn’t overshadow its astounding ability to move people. The ramp was retooled as a highly effective “people mover” 300 years after Galileo, in the design of New York’s Grand Central Terminal. The Vanderbilt family, who operated the rail lines the terminal would service, promised New Yorkers an innovative train hub to accommodate newly electrified tracks. They hired the Minnesota-based architecture firm Reed & Stem to get the job done. “Its innovative scheme featured pedestrian ramps inside, and a ramp-like roadway outside that wrapped around the building to connect the northern and southern halves of Park Avenue,” explains the New York Transit Museum.

As design moved forward, engineers built mock-ups at various slopes and, according to the New-York Tribune, studied “the gait and gasping limit of lean men … fat men … women with babies… and all other types of travelers” to determine the ideal grade. It wasn’t a pointless exercise: When the terminal opened in 1913, it was billed as the first great “stairless” station, in the words of the Grand Central historian Sam Roberts. The flow of passengers with luggage, strollers, and wheelchairs was swift; the “Red Cap” attendants could move their wheeled carriers with ease. The system remains one of the most celebrated in American transit terminals; modern travelers move just as easily up and down the ramps, only with less fanfare.

One frequent passenger at Grand Central Terminal was President Franklin D. Roosevelt, who utilized a “secret platform” and elevator to ascend from the lower-level tracks directly up to the Presidential Suite at the Waldorf-Astoria Hotel. At the time, he was hiding his disability and wheelchair from the American public; Grand Central’s ramps were of no use to him. “The first president with a disability was a great advocate for the rehabilitation of people with disabilities,” explains the Anti-Defamation League. “But [he] still operated under the notion that a disability was an abnormal, shameful condition, and should be medically cured or fixed.”

* * *

This sentiment began to change in the 1940s and 1950s. Many World War II veterans returned home with mobility-related injuries. There was little accommodation for wheelchair users at the time, particularly within public spaces. According to a study by the historian Julie Peterson, disabled veterans attending the University of Illinois often hitched rides on service trucks to avoid sidewalks without accessible ramps.

Returning vets planted the seeds for the disability-rights movement, and activism grew alongside the other social movements of the 1960s. Protesters took to the streets, smashing curbs to create their own accessible ramps. In the 1970s, founders of the Independent Living Movement in Berkeley, California, established a wheelchair route through the University of California campus and its vicinity. According to Peterson, they even rolled their own curb ramps, “covertly laying asphalt in the middle of the night.”

Disability activists lobbied Congress and marched on Washington to include their rights in a major affirmative-action bill that would prohibit employment discrimination by the federal government. The Rehabilitation Act passed in 1973, and for the first time in history, the civil rights of people with disabilities were protected by law. In ensuing years, activists sought to consolidate various pieces of legislation into a single civil-rights statute, much as the Civil Rights Act of 1964 had done for race. But it wasn’t until 1990 that the government passed the Americans with Disabilities Act, making way for the contemporary, ramped environment. The law not only protected the civil rights of disabled Americans; it also required businesses to provide accommodations to people with disabilities and ensured that public spaces would be modified to become wheelchair accessible.

Architectural, design, and planning practices had to adapt after the ADA. It wasn’t—and still isn’t—an easy shift. Annie Boivin, a designer (and wheelchair user) with the architecture firm Perkins+Will, tells me that the Swiss architect Le Corbusier is partly to blame. In the early 20th century, Le Corbusier created the fictitious character Le Modulor—an able-bodied man, of average height and dimension, around whom Le Corbusier believed standardized design should revolve. Whole cities were designed by the able-bodied men on whom Le Modulor was modeled. It was a period with no distinction between what are now known as the two models of disability: medical and social. The medical model views disabled bodies as impaired; the social model points out that the environment was never built for them in the first place.

ADA standardization has attempted to remedy the situation. Architects rely on tools like elevators, lifts, and automatic doors. The ramp, the most visible architectural element of the post-ADA period, is also the most important to wheelchair users. Woodward compares the ramp to a “dependable boyfriend who will never leave us.”

Reliable though it might be, the ADA is hardly a cure-all. All buildings constructed or renovated after the law’s passage must follow standards for accessible design, but many older structures still have relics of inaccessibility—like the single-step entrance that kept Woodward from entering the bar. Disability activists I spoke with say that it’s common for building owners to ignore ADA requirements, and pressing them to follow the rules can be difficult. Just this year, a bill was introduced in Congress that would make it harder to sue building owners who fail to remove so-called “architectural barriers.”

* * *

The problem of the single-step entryway inspired the design researcher Sara Hendren to build her own ramp, called Slope Intercept. It can nest, stack, and move on casters. Hendren bemoans an enduring “compliance culture” within architecture, in which ramps are tacked onto buildings with little imagination. Mia Ives-Rublee, who led the disability caucus for the Washington, D.C., Women’s March, says that ramps are often hard to find, placed in the back of buildings, difficult to navigate, or lead to locked doors. She adds that searching for ramps and finding them along the back of buildings “makes you feel like a second-class citizen.”

Those persistent difficulties impact the visibility of disabled bodies in public spaces. “It can become so tiring,” Ives-Rublee tells me. “A lot of people with disabilities won’t go to new places.”

Hendren, eager to show the creative potential that stemmed from one of Galileo’s simple machines, partnered with the dancer Alice Sheppard to design a ramp Sheppard could use onstage with her wheelchair. Sheppard came to the table no stranger to the failure of ramp design. “Why the hell are these eyesores?” she asked. “Compliance-oriented design tends to miss the aesthetic and physical experience of going down a ramp. It should be beautiful, it should participate.”

Attitudes might be starting to change. In 2001, the architect William Leddy was asked to design the Ed Roberts Campus in Berkeley, California. The campus, which opened in 2010, is named after the founder of Berkeley’s Center for Independent Living—the group that installed their own ramps, in the dead of night, throughout the city in the 1970s. It needed to serve as “a symbol of universal design to the community,” according to Leddy. Universal design, he explains, is a design philosophy that strives to create buildings and products usable by all people, to the greatest extent possible, without any need for adaptation.

At the Ed Roberts Campus, the firm designed a helical, bright-red ramp, a dramatic focal point emerging from the middle of the first-floor lobby. At a width of seven feet, there is space for a row of friends or colleagues to traverse it together. Leddy once stumbled upon a wedding ceremony on the ramp, and he vividly recalled a conversation with a wheelchair user who said this was the first building he could move through seamlessly, without asking for any help. The campus, inspired by the design, integrated the ramp into its logo.

Stephanie Woodward is doing her part, too, as Director of Advocacy for New York’s Center for Disability Rights. Upon encountering a non-compliant business—like the bar she couldn’t access—the group writes a letter offering to assist in improving accessibility. If they get no response, they organize protests around the business. “A lawsuit can take seven years to get one ramp in front of a building, one protest could result in a ramp there next week,” she says.

The organization has filed only one lawsuit against a non-compliant business, but litigation is not how Woodward wants to win her battles. “We shouldn’t have to start a lawsuit to have the same access as everyone else,” she says. “We don’t want to sue, we just want to get in.”


This article appears courtesy of Object Lessons.

Deportation Is Going High-Tech Under Trump
June 21st, 2017, 02:45 AM

In a leafy Detroit suburb last March, federal authorities raided a one-story brick house. Their target: Rudy Carcamo-Carranza, a 23-year-old restaurant worker from El Salvador with two deportation orders, a DUI, and a hit-and-run.

The incident would have seemed like a standard deportation case, except for a key detail unearthed by The Detroit News: The feds didn’t find Carcamo-Carranza through traditional detective work. They found him using a cell-site simulator, a powerful surveillance device developed for the global war on terror.

Five days after his election, Donald Trump announced his plan to quickly deport up to 3 million undocumented immigrants—“people that are criminal,” “gang members,” “drug dealers.” How would he do it? How would he deport more people, more quickly, than any of his recent predecessors? The Carcamo-Carranza case suggests an answer: After 9/11, America spent untold sums to build tools to find enemy soldiers and terrorists. Those tools are now being used to find immigrants. And it won’t just be “bad hombres.”

Much about Trump’s tactics is very old. Trump seeks to ban Muslim immigrants, spy on mosques, and subject Muslims to extreme border interrogations. During the Chinese exclusion of the late 19th and early 20th centuries, the U.S. government banned most Chinese immigrants, sent investigators to spy on their businesses, and subjected them to extreme border interrogations. In 2017, Trump allies defend the Muslim ban by saying it’s not a Muslim ban, but a geographic ban on people from certain “areas of the world.” In 1917, Congress banned Indian immigrants not by name, but by drawing a box around the region and calling it an “Asiatic Barred Zone.”

Still, there are key aspects to immigration enforcement under Trump that are frighteningly new, albeit some time in the making.

In 2000, when George W. Bush was elected, drones, face recognition, mobile fingerprint scanners, and cell-site simulators—which mimic cellphone towers to intercept phone data—were novel or non-existent. Under the Immigration and Naturalization Service and its successor, Immigration and Customs Enforcement, or ICE, immigration enforcement was a low-tech affair, mostly known for large worksite raids.

Under Barack Obama, ICE went high-tech. At the heart of that shift were biometrics: precise, digitized measurements of immigrants’ bodies. Obama ramped up a Bush-era program, Secure Communities, which sent booking fingerprints from local jails to the Department of Homeland Security, shunting hundreds of thousands of undocumented and legal immigrants, many arrested for minor offenses, into federal deportations.

Previously, federal use of biometrics in the field had focused on Iraq and Afghanistan; with a fingerprint or iris scan, soldiers could tell militants from civilians. In his final years, Obama hit the brakes on Secure Communities—but mobile biometrics trickled down anyway. ICE agents began to stop people in the street to scan their fingerprints. Authorities requested face-recognition searches of Vermont driver’s-license photos, looking for visa overstays. Customs and Border Protection sought proposals for face-recognition-enhanced drones that, mid-flight, would scan and identify people’s faces.

For all of these technical advances, however, Obama never unleashed his full surveillance powers on immigration enforcement inside the U.S.; most of Obama’s removals took place at the border. Under his Priority Enforcement Program, actions inside the country were primarily targeted against people with criminal records.

Donald Trump brings two fundamental changes. The first is animus. When Trump calls Mexican immigrants drug traffickers and rapists, when he says a judge cannot do his job because of his Mexican heritage, when he implies that Muslim immigrants are party to a vast, Islamist conspiracy (we have to “figure out what’s going on”), it could send a signal to the rank and file of immigration enforcement.

Second, Trump is starting to use his surveillance arsenal to its utmost legal and technical capacity—within the U.S. Shortly after Carcamo-Carranza’s arrest using a cell-site simulator, a DHS spokesperson clarified that the new “border” drones would not be limited to the border. Instead, the drones would be used wherever there is a “mission need,” a wink at DHS’s claim that the Border Patrol can conduct searches up to 100 miles from the actual border. Simon Sandoval-Moshenberg, a prominent immigration attorney in Virginia, reports that since Trump’s inauguration, every one of his clients arrested by ICE has had their fingerprints scanned before being taken into custody.

Trump’s aggressive use of surveillance is not just about devices. It’s about data. On his fifth day in office, the president issued an executive order on immigration enforcement inside the U.S. Many focused on the fact that he was restoring Secure Communities, the fingerprint-sharing program of the Bush and Obama eras. Fewer noticed the short section, a few lines down, that revoked Privacy Act protections for non-citizens, making it easier for many federal agencies to share with ICE troves of data on legal and undocumented immigrants.

In the era of late-20th-century surveillance—beginning, loosely, with the final years of J. Edgar Hoover and ending with 9/11—there were limits, informal and formal, that focused America’s most powerful surveillance techniques on investigations of the most serious offenses. They were far from perfect, but they were real. The first was cost: It was expensive to “tail” people and track their movements. The second was legal. In 1968, Congress passed the Wiretap Act, which had at its core a simple idea: Wiretaps should be used to catch serious criminals, not petty offenders. You can’t wiretap a jaywalker; you can wiretap a bank robber.

Modern surveillance tools bypass these restraints. They bring the cost of surveillance down to a fraction of the original expense. They outpace federal lawmakers. State legislatures have passed dozens of laws restricting geolocation tracking, cell-site simulators, drones, and other technologies; Congress has passed zero such laws for criminal law enforcement, let alone ICE.

Most people caught in this dragnet will not be like Rudy Carcamo-Carranza. There are not, and never have been, 3 million undocumented criminals. Like his predecessors, most of the people Trump deports will be like Maribel Trujillo Diaz, Arino Massie, or Mario Hernandez-Delacruz: People innocent of any crime. And as Wade Henderson, a dean of the civil rights community, warned, Trump will have, at his disposal, “the greatest surveillance apparatus that this nation, and arguably the world, has ever known.”

In the public eye, Trump’s policies on health care, climate change, and foreign affairs have eclipsed his agenda on immigration. Perhaps people think it only affects immigrants. This is a mistake: Surveillance of immigrants has long paved the way for surveillance of everyone.

Biometrics are no exception. For years, the State Department let the FBI use face recognition to compare suspected criminals’ faces to those of visa applicants. In 2015, State and the FBI announced a pilot program to run these searches against the faces of Americans in passport photos. For years, Congress pressed DHS to use biometrics to track foreign nationals leaving the country. This year, DHS launched face scans through Delta and JetBlue—and both systems scan the faces of foreign nationals and citizens alike.

Fixing Uber Will Require More Than Ousting Its Leader
June 21st, 2017, 02:45 AM

Few were surprised this morning to learn that Travis Kalanick had resigned as CEO of Uber. The company has endured scandal after scandal, many of which trace back to Kalanick in one way or another—whether directly, as a result of his behavior or his business choices, or less directly, as a result of the allegedly toxic and discriminatory culture he helped create as Uber’s founder. It was easy to see why Kalanick had to go. By removing him, investors and the board are undoubtedly hoping to curtail the onslaught of negative attention and let the company get back to growing and raising money in peace. But at this point, rebuilding and rebranding Uber will take more than pushing out its leader.

For Uber’s investors and directors, a leadership change is a way of showing that Uber is serious about taking a new direction, and protecting the company’s reported $70 billion valuation in the process. “Uber’s clearly in a situation where small changes, simple policy adjustments, those sorts of things, weren’t going to satisfy the investor community, the customer base, and the employee base,” says Brian Kropp, the head of the human-resources practice at Gartner, a research and consulting firm.

But though Uber’s troubles tended to trace back to Kalanick in some way, they also went beyond him: Last week, at the same meeting where Kalanick’s (then-temporary) leave was announced, a different board member made a sexist comment that resulted in his own resignation soon after. “Uber has demonstrated that its problem is not only about a single figure—a reputational cancer that could have been cut away—but that the cancer has infected the rest of the body,” says Audra Diers-Lawson, a professor of public-relations strategy at Leeds Beckett University. “Because the bad behaviors have extended beyond just the CEO, a new negative expectation is probably being formed and this is fundamentally damaging to the company.”

According to Diers-Lawson, Kalanick’s ouster was absolutely the right decision for the company, but it would have been better had the company acted while the problems were nascent. “In 2015, the company had the opportunity to genuinely mitigate the damage of his influence on the corporate culture and the company’s reputation as the first wave of this crisis hit the public eye,” she said.

Uber certainly isn’t the only, or the first, tech startup with the problem of a young, brash CEO who creates a unique and disruptive product but cannot seem to make the leap to successful management. A New York Times article from April dubbed this phenomenon the “bro CEO,” citing examples such as Quirky, a gadget-peddling platform that raised $185 million before being undone by the questionable behavior of its 20-something CEO and founder, and an HR startup called Zenefits, which was once valued at $4.5 billion but ousted its young male CEO amid both criticisms of the company’s frat-like culture and allegations that the company had cheated on licensing courses. While Zenefits still exists, it is severely diminished, a fraction of its former size.

But though CEO problems are somewhat common, Uber is a special case. The company, though it’s never actually turned a profit, is flush with investor cash and wildly popular. But beyond that, the timing of Uber’s drama hits right when the public and investors are more engaged than ever in a conversation about the role of corporate culture in the health of a company and the economy more widely. According to Kropp, in 2010, fewer than 40 percent of company earnings calls took the time to discuss issues such as talent or corporate culture; now that figure has climbed to more than 60 percent. That’s because more people today believe that culture is a critical factor in whether a company can attract the right employees and turn a profit.

Now that Kalanick’s gone, there are still some significant structural challenges for the company to overcome. First, there’s the question of what happens to the upper echelons of Uber’s management. The company has long been without a chief operating officer, a vacancy that many experts, and the Holder report, have suggested desperately needs to be filled. As Quartz reported, some tech startups have filled this role with someone who is all the things that the CEO is not. Facebook’s hiring of Sheryl Sandberg in 2008, for example, is widely seen as a brilliant and effective hire that complemented the company’s CEO Mark Zuckerberg, with Sandberg’s experience and corporate diplomacy tempering Zuckerberg’s relative inexperience and sometimes tough management style. But hiring for the COO role, particularly one who will work as a part of a management team, might be difficult without a CEO.

Replacing any CEO, especially one who is pushed out amid controversy, is a significant task. Whoever Uber hires or promotes to fill the role could drastically alter operations, or continue to perpetuate the same problems. Kropp says that replacing a founder-CEO is often an especially tricky task. In cases where the company is doing well and the CEO is well loved, it makes sense to promote internally—someone who could potentially continue along the current path. For established companies with CEO problems, it can pay to change tacks completely, bringing in an outsider. But for a startup such as Uber, a fairly young company with a CEO who left under very public and difficult circumstances, neither option might be quite right. An internal hire may be seen as having accepted and contributed to the existing problems. An outsider may have a difficult time acclimating and understanding which factors make Uber special and unique, and are worth retaining. An outsider may also want to make their mark by completely changing the brand, and that can create corporate and cultural destruction of a different kind. The sweet spot, Kropp says, would be someone who has worked at the company before, but then left and was successful elsewhere. And that’s not easy to come by.

In order to create real and lasting change, Uber will need to spend money, Kropp says, not just try to implement one-time changes. “A lot of companies try to talk themselves out of these sorts of cultural challenges. They’ll write memos, send notes, make presentations, saying things need to change. But at the end of the day, if you’re not spending money to try to change the problem, the likelihood that you’re actually able to change the culture is incredibly low.” Forcing the CEO out is certainly a bold step toward change, but Kropp says that alone won’t be enough. Instead, salvaging Uber will require constant investment and training for initiatives that continually reinforce the company’s new values, accepted behaviors, and expectations. It will need to hire people who align with the new values, and create new roles, such as the one Frances Frei, the new senior vice president of leadership and strategy, inhabits. It will also need to expand budgets to help the people in those new roles build teams and implement big changes that can influence the culture. And it will have to implement ongoing methods of measuring progress and sussing out new problems. Without those continuing efforts, eventually muscle memory will kick in and everyone will go back to their same old behavior, new CEO or not.

When AI Can Transcribe Everything
June 21st, 2017, 02:45 AM

What is the best way to describe Rupert Murdoch having a foam pie thrown at his face? This wasn’t much of a problem for the world’s press, who were content to run articles depicting the incident during the media mogul’s testimony at a 2011 parliamentary committee hearing as everything from high drama to low comedy. It was another matter for the hearing’s official transcriptionist. Typically, a transcriptionist’s job only involves typing out the words as they were actually said. After the pie attack—either by choice or hemmed in by the conventions of house style—the transcriptionist decided to go the simplest route by marking it as an “[interruption].”  

Across professional fields, a whole multitude of conversations—meetings, interviews, and conference calls—need to be transcribed and recorded for future reference. This can be a daily, onerous task, but for those willing to pay, the job can be outsourced to a professional transcription service. The service, in turn, will employ staff to transcribe audio files remotely or, as in my own couple of months in the profession, attend meetings to type out what is said in real time.

Despite the recent emergence of browser-based transcription aids, transcription remains an area of drudgery in the modern Western economy where machines can’t quite squeeze human beings out of the equation. That is, until last year, when Microsoft built a machine that could.

Automatic speech recognition, or ASR, is an area that has gripped the firm’s chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland’s Edinburgh University. “I’d just left China,” he says, remembering the difficulty he had in using his undergraduate knowledge of American English to parse the Scottish brogue of his lecturers. “I wished every lecturer and every professor, when they talked in the classroom, could have subtitles.”

In order to reach that kind of real-time service, Huang and his team would first have to create a program capable of retrospective transcription. Advances in artificial intelligence allowed them to employ a technique called deep learning, wherein a program is trained to recognize patterns from vast amounts of data. Huang and his colleagues used their software to transcribe the NIST 2000 CTS test set, a bundle of recorded conversations that’s served as the benchmark for speech recognition work for more than 20 years. The error rates of professional transcriptionists in reproducing two different portions of the test are 5.9 and 11.3 percent. The system built by the team at Microsoft edged past both.
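For readers curious how such figures are scored: word-error rate is conventionally the minimum number of word substitutions, insertions, and deletions needed to turn a system’s output into the reference transcript, divided by the number of words in the reference. A minimal sketch in Python (the sample sentences are invented):

    # Minimal word-error-rate (WER) sketch: edit distance between the
    # reference and hypothesis word sequences, divided by reference length.
    def wer(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # Invented example: one substitution and one deletion in six words.
    print(wer("it was rather cold but great", "it was rather bold but"))  # ~0.33

By that yardstick, a 5.9 percent error rate means roughly one word in 17 is wrong.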

“It wasn’t a real-time system,” acknowledges Huang. “It was very much like we wanted to see, with all the horsepower we have, what is the limit. But the real-time system is not that far off.”

Indeed, the promise of ASR programs capable of accurately transcribing interviews or meetings as they happen no longer seems so outlandish. At Microsoft’s Build conference last month, the company’s vice-president, Harry Shum, demonstrated a PowerPoint transcription service that would allow the spoken words of the presentation to be tied to individual slides. The firm is also in a close race with the likes of Apple and Google to perfect the transcripts produced by its real-time mobile translation app.

Huang believes the point at which transcription software will overtake human capabilities is open to interpretation. “The definition of a perfect result would be controversial,” he says, citing the error rates among human transcriptionists. “How ‘perfect’ this is depends on the scenario and the application.”

An ASR system tasked with transcribing speech in real time is only deemed successful if every word is interpreted correctly, something that largely has been achieved with mobile assistants like Cortana and Siri, but has yet to be mastered in real-time translation apps.  However, a growing number of computer scientists are realizing that standards do not need to be as high when it comes to the automatic transcription of recorded audio, where any mistakes in the text can be amended after the fact.

Two companies—Trint, a start-up in London, and Baidu, the Chinese internet giant with an application called SwiftScribe—have begun to offer browser-based tools that can convert recordings of up to an hour into text with a word-error rate of 5 percent or less.* On the page, their output looks very similar to the raw documents I typed out in real time during the many meetings I attended as a freelance transcriptionist: at best, a Joycean stream-of-consciousness marvel, and at worst, gobbledygook. But by turning the user from a scribe into an editor, both programs can shave hours off an onerous and distracting task.

The amount of time saved, of course, is contingent on the quality of the audio. Trint and SwiftScribe tend to make short work of face-to-face interviews with the bare minimum of ambient noise, but struggle to transcribe recordings of crowded rooms, telephone interviews with bad reception, or anyone who speaks with an accent that isn’t American or British English. My attempt to run a recording of a German-accented speaker through Trint, for example, saw the engine interpret “it was rather cold, but the atmosphere was great” as “That heart is also all barf. Yes. His first face.”

“We don’t claim that this turnaround in a couple of minutes of an interview like this is perfect,” says Jeff Kofman, Trint’s CEO. “But, with good audio, it can be close to perfect. You can search it, you can hear it, you [can] find the errors, and you know within seconds what was actually said.”

According to Kofman, most of the people using Trint are journalists, followed by academics doing qualitative research and clients in business and healthcare—in other words, professions expected to transcribe a large volume of audio on tight deadlines. That’s in keeping with the anonymized data on user behavior being collected by the developer Ryan Prenger and his colleagues at SwiftScribe. While there is a long tail of users who Prenger speculates are simply AI enthusiasts eager to test out SwiftScribe’s capabilities, he has also spotted several “power users” who run audio through the program on an almost daily basis. It’s left him optimistic about the range of people the tool could attract as ASR technology continues to improve.

“That’s the thing with transcription technology in general,” says Prenger. “Once the accuracy gets above a certain bar, everyone will probably start doing their transcriptions that way, at least for the first several rounds.” He predicts that, ultimately, automated transcription tools will increase both the supply of and the demand for transcripts. “There could be a virtuous circle where more people expect more of their audio that they produce to be transcribed, because it’s now cheaper and easier to get things transcribed quickly. And so, it becomes the standard to transcribe everything.”

It’s a future that Trint is consciously maneuvering itself to exploit. The company just raised $3.1 million in seed money to fund its next round of expansion. Kofman and his team plan to demonstrate its capabilities later this month at the Global Editors Network in Vienna. Their aim is to have the transcription of the event’s keynote address up on the Washington Post’s website within the hour.

It’s difficult to predict precisely what this new order could look like, although casualties are expected. The stenographer would likely join the ranks of the costermonger and the iceman in the list of forgotten professions. Journalists could spend more time reporting and writing, aided by a plethora of assistive writing tools, while detectives could analyze the contradictions in suspect testimony earlier. Captioning on YouTube videos could be standard, while radio shows and podcasts could become accessible to the hard of hearing on a mass scale. Calls to acquaintances, friends, and old flames could be archived and searched in the same way that social-media messages and emails are, or intercepted and hoarded by law-enforcement agencies.

For Huang, transcription is just one of a whole range of changes ASR is set to provide that will fundamentally change society itself, one that can already be glimpsed in voice assistants like Cortana, Siri, and Amazon’s Alexa. “The next wave, clearly, is beyond the devices that you have to touch,” he says, envisioning computing technology discreetly woven into a range of working environments. “UI technology that can free people from being tethered to the device will be in the front and center.”

For the moment, however, the engineers behind automated transcribers will have to content themselves with more workaday users: the journalist sweating a deadline, or the transcriptionist working out the right way to describe a man being pied in a parliamentary select committee.


* This article originally stated that SwiftScribe is a subsidiary of Baidu. We regret the error.

Apple Is a Step Closer to Making Its Own TV Shows
June 21st, 2017, 02:45 AM

This year, Netflix will spend something in the realm of $6 billion on original programming, more than any media company apart from ESPN. Amazon is expected to spend $4.5 billion. Even Google, the owner of YouTube, is looking to spend hundreds of millions making TV shows this year. Streaming TV is no longer a fad—it’s a booming industry, one that’s competitive with cable and network television, and supremely attractive to artists who want to make their work with the least interference possible. Now, just as things have gotten crowded, another tech giant is looking to muscle into the world of original TV content: Apple.

Though Apple, of course, has plenty of money to throw at scripted programming, it’s always seemed cautious about committing to the kind of onslaught that Netflix, Amazon, Hulu and others have engaged in over the past few years. Netflix is now basically offering an entire new season of a television series every week, on top of its original films and slew of comedy specials. Amazon, which provides shows like Transparent to all of its Prime subscribers, has a more democratic process in which it posts pilot episodes online and invites subscribers to watch and review them before ordering them to series.

It’s still unclear what Apple’s strategy is going to be—but the company has hired two of the biggest names in television production to oversee new positions in video programming. Jamie Erlicht and Zack Van Amburg, the longtime presidents of Sony Pictures Television, are joining Apple this summer to begin work on something “exciting,” according to a statement from Apple’s senior vice president Eddy Cue. “There is much more to come,” he teased, providing no other information on their new responsibilities.

It’s pretty easy to guess what comes next. Sony Pictures Television is one of the most respected production companies in the industry, one that’s worked in all genres and mediums. Among the eclectic shows stewarded by Erlicht and Van Amburg since they took the Sony helm in 2005 are Damages, Breaking Bad, Better Call Saul, Drop Dead Diva, Community, Justified, Happy Endings, Hannibal, Masters of Sex, and Underground.

Even before then, they were part of an initial movement toward offering challenging series on basic cable. They worked at Sony (under the executive Steve Mosko) when it sold the shows The Shield and Rescue Me to FX, two of the earliest basic-cable programs to attract attention from critics and Emmy voters. That expanded the “Golden Age” of TV beyond premium-channel offerings like The Sopranos and Sex and the City, eventually spurring the rise of streaming networks. In general, the pair have a proven record of teaming up with interesting creators and shepherding projects with the kind of individual touch that stands out—exactly what is needed in the packed world of Peak TV.

The streaming boom is, first and foremost, auteur-driven: Netflix, Amazon, and Hulu attract well-known creators by offering them more artistic freedom than the world of network television. Shows like House of Cards, Transparent, and Orange Is the New Black are sold as distinctive items: not for everyone, of course, but appealing enough to draw in new subscribers eager to watch one particular show. Critical acclaim is only so important; Netflix CEO Reed Hastings long bragged that the much-derided Hemlock Grove initially attracted more subscribers than House of Cards.

Erlicht and Van Amburg will now be tasked with defining Apple’s new original-TV brand. A statement from the pair said that Apple was looking to bring in programming of “unparalleled quality,” which of course doesn’t mean much; their hiring does seem to indicate that Apple will try to function as more of a traditional TV studio. Some rumors had indicated the company wanted to buy another production company, like Ron Howard and Brian Grazer’s Imagine Entertainment, outright, but instead Erlicht and Van Amburg will build something from the ground up.

Other questions remain: How will Apple present its new shows? Will you need an Apple TV device to watch them? Will the company introduce a subscription service mimicking Netflix, Amazon, and Hulu, and if so, will it buy up the rights to various existing shows and movies to fill out its library? It could also go the route of networks like CBS, offering new shows like The Good Fight and Star Trek: Discovery for a smaller monthly fee, or try something else entirely. Other details will come to light soon, but for now, Apple’s big hires suggest Peak TV’s rapid expansion won’t slow down anytime soon.

What Will Uber Become Without Travis Kalanick as CEO?
June 21st, 2017, 02:45 AM

It came down to money, in the end. Investors backing Uber decided it wasn’t enough that Travis Kalanick announced last week he would take an indefinite leave from his position at the helm of the scandal-plagued company.

He had to go. Now.

This was an “outright rebellion” by shareholders, says Mike Isaac, The New York Times reporter who first reported Kalanick’s surprise ouster overnight. On one hand, it all seemed to have happened rather quickly: Investors delivered a letter to Kalanick while he was on business in Chicago on Tuesday, insisting he step down. Kalanick then spoke with investors and at least one Uber board member, the Times reported, and agreed to resign. (Uber didn’t immediately respond to The Atlantic’s request for comment early Wednesday.)

Viewed another way, Kalanick’s departure was a long, long time coming. Uber has been beset by scandals for most of the year, including a boycott campaign from users, explosive allegations of sexual harassment by a former Uber engineer, a leaked video showing Kalanick arguing with an Uber driver, a federal lawsuit alleging Uber stole a competitor’s design secrets—and those aren’t even all of the big ones. More than once, one unfavorable story about Uber was still prominently in the news when the next PR nightmare materialized.

To onlookers without any stake in the company, Uber’s troubles have been so pronounced as to seem, at times, darkly funny. (“Getting Out Ahead Of This One: Uber Has Apologized In Advance If Anyone Finds Out About Something Called ‘Project Judas,’” said a joke-headline from the satirical website Clickhole, a sister site to The Onion.) In recent weeks, so many of Uber’s senior leaders had either resigned or been fired that, as one Twitter user joked, a company focused on self-driving cars had become driverless itself. Susan Fowler, the engineer who wrote the explosive blog post about Uber’s toxic culture in February, joked about the possibility of a Hollywood adaptation of the mess: “I would just like to say, just for the record, that I would like to be played by Jennifer Lawrence.”

But the serious questions always came back to Kalanick. It began to seem there was no breaking point. How long could one man remain in charge of a company that seemed to be so badly flailing? And, crucially, what was the public-relations fire-swamp doing to Uber’s $70 billion valuation?

Kalanick’s ouster—and the paradox of how it seems both sudden and drawn out—is a reflection of the forces that rule Silicon Valley. Namely, money, money, and more money. (“Cash flows before bros,” as the tech news site Pando put it last week.)

It was ultimately concerns over the bottom line—not merely the toxic culture, or Kalanick’s trademark hubris, or explosive allegations of sexual harassment, or revelations about Uber’s secret software for evading law enforcement—that forced Kalanick out. Well, out of his job as CEO, that is. He’ll still be on Uber’s board of directors, and he will retain control of a majority of Uber’s voting shares.

Which means that, even without Kalanick at the helm, Uber is still the Uber Kalanick built—barring other changes that the company has promised to make. In the meantime, you can be sure Uber employees are watching to see who will succeed their old boss, and what that hire might reveal about the seriousness with which Uber takes its employees’ complaints and its commitment to improving diversity. That remains an open question: Uber’s recent internal investigation yielded superficial and outright bizarre attempts to change the company’s culture—renaming the “War Room” the “Peace Room,” for example, and a request for everyone who attended a company meeting to hug. (Seriously.)

All this calls to mind the old business joke about a CEO who attends a conference on the importance of corporate culture, then barks at the head of HR, “Get me one of those things.” The difficulty of changing a company’s culture—even after shaking up top leadership—was on full display last week. Shortly after Uber published a spate of initiatives it said would help the company move past its hostile reputation, leaked audio emerged of a board member making a sexist remark at a meeting intended to help with a smooth transition during Kalanick’s then-leave. (Within hours, that board member had resigned.)

Now that Kalanick’s indefinite leave has become definite, Uber finds itself at a crossroads. An Uber without its founding CEO is an Uber untethered to the principles that the company has associated with its rapid growth since it launched in 2009, for better and perhaps for worse. Uber has recently tried to distance itself from some of what it long described as core competencies—qualities like “super pumpedness,” “always be hustling,” and “toe-stepping.” It even announced this week that it will allow tips to drivers, reversing a longstanding and controversial policy.

Neither Kalanick’s departure nor small hints at changes to come are guarantees that Uber’s troubles are over. One of the biggest tests ahead is Uber’s legal battle with Waymo, the driverless-car company that spun out from Google, which claims Uber stole its design secrets.

Eventually, it was investors who answered the question of whether Uber could thrive with Kalanick as CEO. They decided it could not. Next, they will find out if the company can survive without him.

Uber's CEO Is Out
June 21st, 2017, 02:45 AM

Uber CEO Travis Kalanick has resigned, reportedly following a shareholder revolt, capping a tumultuous few months of largely self-inflicted PR disasters.

“I love Uber more than anything in the world and at this difficult moment in my personal life I have accepted the investors’ request to step aside so that Uber can go back to building rather than be distracted with another fight,” Kalanick said in a statement, cited by The New York Times and others. Bloomberg said he’d remain on the company’s board.

Last week Kalanick said he would take an indefinite leave of absence from the company, both to work on himself amid a series of controversies and to mourn his late mother.

Here’s more from the Times on his resignation:

Mr. Kalanick’s exit [Tuesday] came under pressure after hours of drama involving Uber’s investors, according to two people with knowledge of the situation, who asked to remain anonymous because the details were confidential.

Earlier on Tuesday, five of Uber’s major investors demanded that the chief executive resign immediately. The investors included one of Uber’s biggest shareholders, the venture capital firm Benchmark, which has one of its partners, Bill Gurley, on Uber’s board. The investors made their demand for Mr. Kalanick to step down in a letter delivered to the chief executive while he was in Chicago, said the people with knowledge of the situation.

Tuesday’s move is the culmination of months of controversy that began when Kalanick agreed last December to serve on President Trump’s advisory council. But in February, following the president’s executive order on immigration—and public criticism of how Uber reacted to protests against the order—Kalanick resigned from the group.

Controversy followed: There were allegations of a culture of widespread sexism at Uber; a federal lawsuit by Waymo, Google’s self-driving-car company, accused Uber of stealing its designs, leading ultimately to Uber’s firing of Anthony Levandowski, the central figure in the allegations; and the Department of Justice opened an investigation into software Uber used to sidestep authorities.

Amid this, Kalanick’s own PR troubles mounted: He was filmed berating an Uber driver; it emerged he directed his engineers to camouflage the Uber app so Apple’s engineers wouldn’t see it, allowing the app to secretly track iPhones even after it was deleted; and at least one high-profile executive departing the company said “the beliefs and approach to leadership that have guided my career are inconsistent with what I saw and experienced at Uber.”

Ultimately the very attributes that made Kalanick and Uber a darling of Silicon Valley’s investors brought about his downfall. The company has been valued at about $70 billion, and investors feared that any initial public offering would be imperiled by Uber’s temperamental CEO. As the Times noted:

Taking a start-up chief executive to task so publicly is relatively unusual in Silicon Valley, where investors often praise entrepreneurs and their aggressiveness, especially if their companies are growing fast. It is only when those start-ups are in a precarious position or are declining that shareholders move to protect their investment.

In the case of Uber — one of the most highly valued private companies in the world — investors could lose billions of dollars if the company were to be marked down in valuation.

The result: Kalanick’s resignation.

Did Climate Change Ground Flights in Phoenix?
June 20th, 2017, 02:45 AM

Weather always makes good news, but the role of climate change in altering weather, especially extreme weather, has made the subject a lightning rod for unease.

A case in point this week: A heat wave is triggering record temperatures in the Southwest. American Airlines reported having canceled up to 50 flights at Phoenix’s Sky Harbor airport, where the temperature has neared 120 degrees in recent days.

Flight cancellations are a perfect foundation for climate-change panic. Commercial air travel is an aspect of ordinary life that touches everyone: Travelers can’t help but worry that their mobility will be impacted by near- and long-term effects of climate change. Much of the coverage tracking the American Airlines cancellations pegs climate change as a direct or indirect cause of the disruption.

That account isn’t wrong. But it doesn’t tell the full story, either.

When I asked, American Airlines cited a 118-degree “maximum operating temperature” for the flights in question, and confirmed that “the heat has impacted some of our regional flights.” But airplanes don’t exactly have such neat and tidy maximum temperatures. Temperature limits might affect avionic systems—the electronics that run communication, navigation, and so forth—but temperatures interact with airplane performance more than they allow or prohibit flight itself. Density altitude, which can change in part based on temperature, affects the aerodynamic performance of specific aircraft, but that performance also interacts with other factors, including weight.

“Aircraft engine performance is a function of many things including air temperature,” Glenn Lightsey, an aerospace engineer and colleague of mine at Georgia Tech said. “Hotter days require longer runways and more gradual ascent paths to lift the same weight.” Flight is complex, and it cannot be boiled down to a single number.
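To make that relationship concrete, here is a minimal sketch of the rule-of-thumb density-altitude calculation taught to pilots. It assumes the standard approximation of roughly 120 feet per degree Celsius above standard temperature; the figures are illustrative, not numbers from American Airlines or Bombardier.

    # Rule-of-thumb density-altitude estimate (a sketch, not airline procedure).
    def isa_temp_c(pressure_altitude_ft):
        """Standard-atmosphere temperature: 15 C at sea level, falling ~2 C per 1,000 ft."""
        return 15.0 - 2.0 * pressure_altitude_ft / 1000.0

    def density_altitude_ft(pressure_altitude_ft, oat_c):
        """The altitude the airplane 'feels': field elevation plus ~120 ft per degree above standard."""
        return pressure_altitude_ft + 120.0 * (oat_c - isa_temp_c(pressure_altitude_ft))

    # Phoenix Sky Harbor sits at roughly 1,100 feet. At 48 C (about 118 F):
    print(round(density_altitude_ft(1100, 48)))  # ~5,300 ft

On that arithmetic, a 118-degree afternoon makes a runway at 1,100 feet perform as if it sat above 5,000 feet, which is exactly why hot days demand longer runways, lighter loads, and shallower climbs.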

The specific aircraft matters, too. American Airlines canceled flights using Canadair Regional Jet (CRJ) equipment. These are the smaller regional jets that cover routes between hubs and smaller markets. Larger passenger jets are rated to tolerate higher temperatures, well above those currently being experienced in the American Southwest—after all, planes also fly from Dubai, Riyadh, and Cairo.

The CRJ’s history might play a role in its airworthiness under extreme heat. CRJs are currently made by Bombardier, a multinational transportation manufacturer. Bombardier bought the CRJ line from Canadair, a Canadian state aerospace company. These jets were originally designed for business use, and only later developed to serve the commercial regional jet market. They were not necessarily intended for use in all conditions and markets, nor to be packed full of passengers like they are today. (Bombardier did not immediately respond to a request for comment.)

That circumstance is a consequence of deregulation and consolidation in the American airline market. When regulation demanded that airlines serve all markets, larger jets serviced smaller airports. But as those requirements lifted, and as more airlines merged, even once-thriving hubs like Cincinnati, St. Louis, and Memphis have become minor markets. Airlines began relying on equipment like the CRJ because such planes can transport a smaller number of people at a lower cost. Were the affected flights on larger Boeing jets instead, there would be no question about their ability to fly.

Speaking of cost, it’s not clear if American itself has issued the CRJ-based flight cancellations, or if they came from the regional partners that actually operate those flights under American Airlines livery. The business relationships between major carriers and their regional partners are complex. Some are wholly owned subsidiaries, while others—including Mesa and SkyWest, which serve Phoenix on behalf of American—are contracted carriers.

Regional carriers tend to endure financial pressures from their major-carrier partners, some of which might make the effects of high-temperature operation a financial or operational burden. For example, it’s possible that the planes could fly safely above a certain ground temperature, but that the performance data to facilitate that flight is not already available or easily determined. Airlines have to buy the performance charts used to operate flights, and they might determine that it is not worth purchasing them for unlikely or uncommon scenarios.

American Airlines didn’t comment when I asked who had made the determination to cancel flights, or if available performance data had any impact on the decision. At least one other airline, Delta, also canceled a flight operated by SkyWest on CRJ metal scheduled at the peak of Tuesday’s heat, although it isn’t clear if temperature played a role in that decision, or which airline made the call to cancel it.

Grounding flights due to heat in Phoenix clearly is a matter that interacts with climate change. But it’s not solely explained by climate change. Industrial history, public policy, market economics, and other factors exert pressure on the situation, too.

And that applies to more than flight. Climate change is a wicked problem because it interacts with so many other aspects of the lived and built environment. It does the subject a disservice to pretend that it can be summarized by the reading on a thermometer.

Beyond the Five Senses
June 20th, 2017, 02:45 AM

The world we experience is not the real world. It’s a mental construction, filtered through our physical senses. Which raises the question: How would our world change if we had new and different senses? Could they expand our universe?

Technology has long been used to help people who have lost, or were born without, one of the five primary senses. More recently, researchers in the emerging field of “sensory enhancement” have begun developing tools to give people additional senses—ones that imitate those of other animals, or that add capabilities nature never imagined. Here’s how such devices could work, and how they might change what it means to be human.

1 | Hearing Pictures

For decades, some deaf people have worn cochlear implants, which use electrode arrays to stimulate the auditory nerve inside the ear. Researchers are working on other technologies that could restore sight or touch to those who lack it. For the blind, cameras could trigger electrodes on the retina, on the optic nerve, or in the brain. For the paralyzed or people with prosthetic limbs, pressure pads on real or robotic hands could send touch feedback to the brain or to nerves in the arm.

Autistic people might even gain a stronger social sense. Last year, MIT researchers revealed the EQ-Radio, a device that bounces signals off people to detect their heart rate and breathing patterns. A yet-to-be-invented device might infer a target’s mood from those data and convey it to an autistic user—or anyone who wants to improve their emotional intuition.

We can also substitute one sense for another. The brain is surprisingly adept at taking advantage of any pertinent information it receives, and can be trained to, for instance, “hear” images or “feel” sound. For the blind, a device called the BrainPort V100 connects a camera on a pair of glasses to a grid of electrodes on a person’s tongue. At first the effect just feels like tiny bubbles, but eventually users can learn to read stronger points of stimulation as bright pixels and weaker points as dark ones, and can form a mental picture.

Somewhat similarly, a Dutch device called the vOICe (“Oh I see!”) uses a camera to create a soundscape that the vision-impaired wearer hears through headphones. To the uninitiated it sounds like bursts of static, but with training, people can discern images. Every second or so, the sound pans from left to right, using frequency to indicate an object’s height (the taller the object, the higher the pitch) and volume to indicate its brightness.
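Based only on the mapping described above (a roughly one-second left-to-right pan, pitch for height, volume for brightness), here is a toy sketch of how such a soundscape could be generated. The sample rate and frequency range are illustrative guesses, not the vOICe’s actual parameters.

    # Toy vOICe-style image-to-sound mapping (an illustration, not the real device).
    import numpy as np

    SAMPLE_RATE = 22_050   # Hz, assumed for illustration
    SCAN_SECONDS = 1.0     # the sound pans "every second or so"

    def image_to_soundscape(img):
        """img: 2-D grayscale array with values in [0, 1], row 0 at the top."""
        n_rows, n_cols = img.shape
        col_samples = int(SAMPLE_RATE * SCAN_SECONDS / n_cols)
        t = np.arange(col_samples) / SAMPLE_RATE
        freqs = np.linspace(3000, 300, n_rows)  # taller objects occupy higher rows: higher pitch
        chunks = []
        for col in range(n_cols):               # pan across the image, left to right
            tone = sum(img[row, col] * np.sin(2 * np.pi * freqs[row] * t)  # brightness sets volume
                       for row in range(n_rows))
            chunks.append(tone / n_rows)        # keep amplitude in a sane range
        return np.concatenate(chunks)

    # A bright diagonal line (top-left to bottom-right) becomes a falling sweep.
    audio = image_to_soundscape(np.eye(16))

Played through headphones, the diagonal in the example registers as a tone that falls in pitch as the scan moves rightward: static to the uninitiated, an image with practice.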

For the deaf, David Eagleman, a neuroscientist at Stanford University, has developed a vest that turns sound into a pattern of vibrations on the torso. With practice, people can learn to use it to interpret speech and other sounds.


2 | Borrowing From Nature

Scientists are also exploring ways to add senses found elsewhere in the animal kingdom. For instance, a handheld device called the Bottlenose, built by amateur biohackers, uses ultrasound to detect the distance of objects, then vibrates the user’s finger at different frequencies, giving him or her echolocation. Other devices provide the navigational sense of migratory birds: A company called feelSpace sells the naviBelt, a belt that points you in your desired direction by vibrating on your waist. Another company, Cyborg Nest, sells the North Sense, a device you can attach to your chest that vibrates when pointing north.

In the future, cochlear implants could be tuned to pick up really low frequencies, such as those used by elephants, or really high ones, such as those used by dolphins. Bionic eyes could be built to allow humans to see ultraviolet rays (as butterflies, reindeer, dogs, and other animals can) and infrared light (as certain snakes, fish, and mosquitoes can).

Some researchers think we may eventually install a port in our brains that would allow us to swap in different sensors when we need them. “Maybe there’s a Swiss Army Knife of sensors that you carry with you,” says Rajesh P. N. Rao, the director of the National Science Foundation’s Center for Sensorimotor Neural Engineering. You might rely on a distance sensor when climbing a mountain, then plug in night vision after dark.

3 | Sensing Moonquakes

We might also gain senses that no other animal has. The vibrating vest Eagleman created can be programmed to receive any input, not just sound. He says it could be used to monitor the stock market, or sentiment on Twitter, or the pitch and yaw of a drone, or one’s own vital signs. You could of course display these things on a computer screen, but our brains can’t attend to lots of visual details at once, Eagleman says. The body, on the other hand, is used to monitoring dozens of muscles just to keep us balanced, so it would be more adept at handling multidimensional inputs.
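As a toy illustration of that idea, and emphatically not Eagleman’s actual system, any vector of readings could be scaled into intensities for a grid of vibration motors. The motor count and the normalization scheme below are assumptions made for the sake of the sketch.

    # Mapping an arbitrary data stream onto vibration motors (illustrative only).
    import numpy as np

    N_MOTORS = 32  # assumed number of motors on the vest

    def to_vibration_pattern(channels):
        """Scale raw input channels to per-motor intensities in [0, 1]."""
        lo, hi = channels.min(), channels.max()
        scaled = (channels - lo) / (hi - lo + 1e-9)   # normalize the readings
        # Spread the channels across the available motors.
        idx = np.linspace(0, len(channels) - 1, N_MOTORS).round().astype(int)
        return scaled[idx]

    # Example: eight "vital sign" readings become a 32-motor pattern.
    pattern = to_vibration_pattern(np.random.rand(8))

Whether the stream is stock prices, drone telemetry, or vital signs, the body is left to learn what the resulting patterns mean.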

A cortical implant could also theoretically take in just about any type of information, which the brain could process as a new sense. “You can do whatever you want,” says Neil Harbisson, a “cyborg artist” who’s originally from Spain. “You can design a unique sense that is related to your interests or to your curiosity.”

Harbisson was born seeing in gray scale. In 2004, he had an antenna surgically attached to his skull. The antenna has a camera at the end and vibrates at different frequencies, turning colors into sound. (He can also use the antenna to take phone calls and listen to music.) He plans to implant a band around his head with a warm spot that orbits every 24 hours, giving him a temporal organ. His friend and collaborator Moon Ribas has a wireless chip in her arm that vibrates when earthquakes occur anywhere in the world, giving her a seismic sense. She hopes to put vibrating implants in her feet that convey moonquakes.

But Bernd Fritzsch, a neuroscientist at the University of Iowa, cautions that for every patch of neural real estate we dedicate to interpreting a new sense, we leave fewer neurons for processing the others. So with each sense we add, we’re also taking something away.

4 | Literal Groupthink

Perhaps we’ll even achieve that so-called sixth sense: ESP. Kevin Warwick, an engineer at Coventry University, in the U.K., wirelessly connected an electrode in his arm to one in his wife’s arm, so that wherever they were, they could feel when the other flexed a hand. Eagleman wants to take that idea one step further and wirelessly connect heart and sweat monitors on his wife and himself so they can sense each other’s moods.

Research by Rao shows that people can send yes/no messages telepathically: An EEG senses brain activity in the sender and another device applies magnetic pulses to the brain of the receiver. Eventually, we might have brain implants connected wirelessly. “This kind of communication might get over some of the limitations of language,” Rao says. It could help people share sensations or express thoughts that are hard to put into words, and enhance collaboration. “I think that will completely change how we are as humans,” Warwick says. “Telepathy is the future.” Indeed, Elon Musk recently started a company called Neuralink focused on connecting brains to computers; he says it could someday enable computer-mediated telepathy.

Exactly how all this tinkering will change us remains to be seen. Harbisson says that gaining animals’ senses “would allow us to connect with nature and to other species in a more profound way.” But if shared senses connect us to other species, might sensation inequality pull people apart by creating new categories of haves and have-nots? We already struggle to agree on what’s real and what’s fake; that problem seems likely to get worse as technology creates new means of perception. “Society is stretched like an elastic band,” Warwick says. Radical sensory enhancement for some could stretch it even more. “The question is, does the elastic band break?”

What an AI's Non-Human Language Actually Looks Like
June 20th, 2017, 02:45 AM

Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.

In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) When I wrote about all this last week, lots of people reacted with some degree of trepidatious wonder. Machines making up their own language is really cool, sure, but isn’t it actually terrifying?

And also: What does this language actually look like? Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Not only does this appear to be nonsense, but the bots don’t really seem to be getting anywhere in the negotiation. Alice isn’t budging from her original position, anyway. The weird thing is, Facebook’s data shows that conversations like this sometimes still led to successful negotiations between the bots in the end, a spokesperson from the AI lab told me. (In other cases, researchers adjusted their model and the bots would develop bad strategies for negotiating—even if their conversation remained interpretable by human standards.)

One way to think about all this is to consider cryptophasia, the name for the phenomenon when twins make up their own secret language, understandable only to them. Perhaps you recall the 2011 YouTube video of two exuberant toddlers chattering back and forth in what sounds like a lively, if inscrutable, dialogue.

There’s some debate over whether this sort of twin speak is actually language or merely a joyful, babbling imitation of language. The YouTube babies are socializing, but probably not saying anything with specific meaning, many linguists say.

In the case of Facebook’s bots, however, there seems to be something more language-like occurring, Facebook’s researchers say. Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure, and defined vocabulary and syntax—though not always actually meaningful by human standards.

In one preprint paper added earlier this year to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication ... no human supervision!”
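In the spirit of those experiments, though not a reproduction of the researchers’ actual models, a simple Lewis-style signaling game shows how two agents can converge on arbitrary “words” for colors and shapes through nothing but trial, error, and reward. Everything here, from the token set to the learning rate, is an assumption for illustration.

    # A toy signaling game: two agents invent a shared vocabulary from scratch.
    import random

    objects = [(c, s) for c in ("red", "blue") for s in ("circle", "square")]
    tokens = ["aa", "ab", "ba", "bb"]  # arbitrary symbols with no built-in meaning

    # Tiny random initial values break ties between untried options.
    q_send = {o: {t: random.random() * 0.01 for t in tokens} for o in objects}
    q_recv = {t: {o: random.random() * 0.01 for o in objects} for t in tokens}

    def choose(table, eps=0.1):
        if random.random() < eps:              # explore occasionally
            return random.choice(list(table))
        return max(table, key=table.get)       # otherwise pick the best-known option

    for _ in range(5000):
        target = random.choice(objects)
        token = choose(q_send[target])         # the speaker "names" the object
        guess = choose(q_recv[token])          # the listener interprets the name
        reward = 1.0 if guess == target else 0.0
        q_send[target][token] += 0.1 * (reward - q_send[target][token])
        q_recv[token][guess] += 0.1 * (reward - q_recv[token][guess])

    for o in objects:  # after training, each object usually gets a stable token
        print(o, "->", max(q_send[o], key=q_send[o].get))

No one tells the agents which token means what; a stable mapping typically emerges from the feedback alone, a minimal version of what the researchers describe as grounded language arising without human supervision.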

The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.

But let’s take a step back for a minute. Is what any of these bots are doing really language? “We have to start by admitting that it’s not up to linguists to decide how the word ‘language’ can be used, though linguists certainly have opinions and arguments about the nature of human languages, and the boundaries of that natural class,” said Mark Liberman, a professor of linguistics at the University of Pennsylvania.

So the question of whether Facebook’s bots really made up their own language depends on what we mean when we say “language.” For example, linguists tend to agree that sign languages and vernacular languages really are “capital-L languages,” as Liberman puts it—and not mere approximations of actual language, whatever that is. They also tend to agree that “body language” and computer languages like Python and JavaScript aren’t really languages, even though we call them that.

So here’s the question Liberman poses instead: Could Facebook’s bot language—Facebotlish, he calls it—signal a new and lasting kind of language?

“Probably not, though there’s not enough information available to tell,” he said. “In the first place, it’s entirely text-based, while human languages are all basically spoken or gestured, with text being an artificial overlay.”

The larger point, he says, is that Facebook’s bots are not anywhere near intelligent in the way we think about human intelligence. (That’s part of the reason the term AI can be so misleading.)

“The ‘expert systems’ style of AI programs of the 1970s are at best a historical curiosity now, like the clockwork automata of the 17th century,” Liberman said. “We can be pretty sure that in a few decades, today’s machine-learning AI will seem equally quaint.”

It’s already easy to set up artificial worlds populated by mysterious algorithmic entities with communications procedures that “evolve through a combination of random drift, social convergence, and optimizing selection,” Liberman said. “Just as it’s easy to build a clockwork figurine that plays the clavier.”

When Does Amazon Become a Monopoly?
June 19th, 2017, 02:45 AM

On Friday morning, Amazon announced it was buying Whole Foods Market for more than $13 billion. About an hour later, Amazon’s stock had risen by about 3 percent, adding $14 billion to its value.

Amazon basically bought the country’s sixth-largest grocery store for free.

As the financial reporter Ben Walsh pointed out on Twitter, this is the opposite of what’s supposed to happen—normally, the acquiring company’s share price falls after a major purchase—and it suggests that investors now believe something odd is going on with Amazon. What could it be?

From a straightforward standpoint, the Whole Foods acquisition means that Amazon will now participate in the $700 billion grocery-store business. Jeff Bezos, the company’s president and CEO, has made grabs at that market for several years—launching Amazon Fresh, a food home-delivery service, and opening several Amazon-branded bodegas in Seattle. Now he owns one of the industry’s best-known brand names.

But Amazon paid a premium to buy Whole Foods, so its new full entry into another industry doesn’t quite explain the rise. Instead, the boost in share price suggests something more ominous: An incredible amount of economic power is now concentrated in Amazon, Inc., and investors now believe it is stifling competition in the retail sector and the broader American economy.

As the country’s biggest online retailer of cleaning supplies and home goods, Amazon competes with Walmart, Target, and Bed Bath & Beyond. As a clothing and shoe retailer, it competes with DSW, Foot Locker, and Gap. As a distributor of music, books, and television, it competes with Apple, Netflix, and HBO. In the past decade, Amazon has also purchased the web’s biggest independent online shoe store, its biggest independent online diaper store, and its biggest independent online comics store.

And it is successful on nearly all of those fronts. Last year, Amazon sold six times as much online as Walmart, Target, Best Buy, Nordstrom, Home Depot, Macy’s, Kohl’s, and Costco did combined. Amazon also generated 30 percent of all U.S. retail sales growth, online or offline.

Yet Amazon’s dominance extends far beyond retail. It also lends credit, publishes books, designs clothing, and manufactures hardware. Three years ago, it bought Twitch.tv, a central player in the $1-billion business of e-sports. And on top of all this, it operates Amazon Web Services, a $12-billion business that rents servers, bandwidth, and computing power to other companies. Slack, Netflix, Dropbox, Tumblr, Pinterest, and the federal government all use Amazon Web Services.

It is, in short, an Everything Store: not only selling goods but also producing them, not only distributing media from its servers but also renting them out to others. And it’s left many experts and analysts wondering: When does Amazon become a monopoly?

“I think of Amazon as serving almost as the essential infrastructure for the American economy at this point, when it comes to commerce. And that affords Amazon a lot of power and control,” says Lina Khan, a fellow on the Open Markets team at New America, a center-left think tank.

In January, Khan called for Amazon to receive more antitrust scrutiny in an article in The Yale Law Journal.

Historically, many of Amazon’s critics have focused on its Marketplace feature, which allows small businesses to sell their goods through Amazon’s website. Some merchants have accused Amazon of secretly using Marketplace as a laboratory: After collecting data on which products do best, it introduces low-price competitors available through its flagship service.

The Institute for Local Self-Reliance, a nonpartisan advocacy group, has also criticized Amazon for this alleged anticompetitive behavior. “By controlling this critical infrastructure, Amazon both competes with other companies and sets the terms by which these same rivals can reach the market. Locally owned retailers and independent manufacturers have been among the hardest hit,” said a recent report from the group.

But as Amazon has grown across the economy, concern has grown about its strength and power more broadly. “Amazon introduced itself to consumers as a middle man for books,” Khan told me. “But it expanded into becoming a middle man for all sorts of other things—and, for some time now, it has expanded well beyond that middle-man role. As it distributes more content and produces more goods, it’s running into more and more conflicts of interest.”

In short, people have begun to wonder if Amazon is just too big: a company that already controls too much of online retail and that has started to exert its dominance downward into the rest of the supply chain.

Amazon has historically declined to comment about antitrust issues. It recently began searching for a professional economist to consult on competition law concerns. Before November, one of the loudest critics of its market dominance was Donald Trump, who implied on the campaign trail that Jeff Bezos faces “antitrust problems.”

Trump has not yet appointed someone to chair the Federal Trade Commission. The commission must review the acquisition before its completion.

When the United States began to push for fairer competition between businesses in the early 20th century, it focused on two kinds of monopolistic organizations: horizontal monopolies and vertical monopolies. In the steel business, for instance, a horizontal monopoly buys up a lot of steel mills, boxing other competitors out. A vertical monopoly buys up and down the supply chain—acquiring barges and trains and coal mines—essentially barring other companies from competing with it.

Through the middle of the century, regulators focused on business arrangements that could use their control of markets to inflate prices for consumers—cracking down on cartels and more informal price-fixing or market-controlling arrangements—and also on trusts and firms that exercised monopolistic control over their industries.

Starting in the late 1970s, though, legal scholars began arguing that monopolistic behavior could only be measured if it raised prices for consumers. Regulators and judges took notice and opted to pay less attention to overall market structure. And inspired by the corporate raiding and hostile takeovers of the early 1980s, many experts came to believe that bigness in the market would always fix itself.

That consensus has come under attack in the past decade, partly thanks to companies like Amazon. During its first 10 years in operation, Amazon rarely returned profits, and investors allowed Bezos to keep investing in infrastructure and market share. The end result is today’s Amazon: a behemoth company that returns a meager profit, with a stock worth nearly 200 times as much as it earns.

Khan and others have called for the focus to be less on Amazon’s prices and more on its economic power. “Nobody would quibble that Amazon in its current form today is great for consumers. The question is what do things look like going forward?” she asked.

“Americans love to think about their economy as open and competitive,” she said. “But when a growing share of the economy is contained by Amazon, it’s a form of centralization. Owning your own business used to be a way for Americans to build assets and pass on wealth inter-generationally. But if you look at any sector where Amazon is a dominant player—you’d be somewhat crazy to enter there.”

This effect has been true even of large startups. Jet.com opened early last year as a Sam’s Club-style competitor to Amazon, attracting millions in venture capital and plenty of press coverage. And while it grew quickly, it did not last long as an independent company. Walmart bought Jet.com for $3 billion last August.

In the near term, the Whole Foods purchase worries some analysts most because it immediately gives Amazon another infrastructural advantage: more than 400 small warehouses, spread out across some of the most affluent (and Amazon-using) neighborhoods in the United States. They fret that Amazon’s logistical advantages—its network of warehouses, delivery routes, and cargo jets across North America—have given it an unbeatable advantage over other firms. And they argue that advantage was spawned not by technological innovation but by an unending stream of money from Wall Street.

These critics are calling for Amazon to receive a kind of scrutiny now rare in the United States. First, they say, the Federal Trade Commission should think hard before approving its purchase of Whole Foods. Second, politicians and regulators should look harder at its structure. They should ask themselves whether its integration is worth its cost—and then either restrict that integration, essentially breaking Amazon up, or regulate and neutralize its consolidation.

I asked Khan if she was really thinking about breaking up Amazon. “People have been timid and think that’s an extreme response,” she said. “But I think it’s worth noting that Amazon is expanding in an unprecedented way across our economy.”

She called back to Amazon’s spiky share price that morning. “Investors know it’s monopolistic,” she told me. “That’s why its stock price has been so untethered from profits. The market can register a reality that our laws cannot.”

A Silicon Valley Congressman Takes On Amazon
June 19th, 2017, 02:45 AM

When Amazon announced last week that it intended to acquire the upscale grocery chain Whole Foods, it sent shockwaves through the grocery industry. Other grocers’ share prices plummeted. Analysts predicted Amazon would become a “top five” grocer within a few years. Synergies were imagined.

Within all the business chatter, however, a few policy wonks and at least one ally in Congress began to raise the antitrust alarm. They think Amazon is too powerful and might engage in anti-competitive practices.

On its face, and judged on the scale of recent jurisprudence, it’s not the most obvious antitrust situation. Amazon has a tiny slice of the grocery market. Whole Foods, large though it may loom in affluent cities, only has 1.2 percent market share. And while Amazon has a dominant position in e-commerce, e-commerce sales remain less than 10 percent of total retail receipts.

But freshman Congressman Ro Khanna, who represents the South Bay, including a big chunk of Silicon Valley, said that the Amazon-Whole Foods deal shows why the government should think differently about mergers. “This is a case study for how we think about antitrust policy,” he said. “It’s the particulars here.”

Khanna said that recent antitrust cases have turned on the question of whether a merger would, in point of fact, immediately raise prices for consumers. Drawing on the work of Matt Stoller and Lina Khan at the New America Foundation, he traced that very narrow test to Robert Bork’s The Antitrust Paradox, which was a move away from decades of more expansive thinking about industry concentration.

In this interview, Khanna calls for a “reorientation” of antitrust decision making to look at a much broader set of concerns, including the effect that a merger could have on jobs, wages, innovation, and small businesses. Whether he can get traction for this idea might be a bellwether for how well the populist wave in U.S. politics can translate into policy reprioritization.

This interview has been lightly edited and condensed.


Alexis Madrigal: Over the last few days, you’ve said that you’re “deeply worried” about the Amazon-Whole Foods deal. What’s drawn your attention to it?

Ro Khanna: I’m very concerned about it, especially the impact that it’s going to have on local grocers. The Walmarts and Targets already are putting pressure on grocers. And that is something in my district: For example, you have Felipe’s Produce in Sunnyvale and Cupertino. These local groceries have already faced so much pressure, and that’s gonna aggravate that situation. As you know, for many immigrant families, grocers are the route into the middle class and the path to wealth creation.

The second challenge to the merger is wages. Whole Foods has a record of paying people really well. One of their founders had a rule that the CEO shouldn’t be paid more than 20 times the average worker. Amazon has not had the same record. You could have downward pressure on wages. And Amazon is a large conglomerate and can leverage suppliers to lower prices, which creates downward pressure on suppliers’ workers’ wages, too.

If the only metric is “Is this gonna lower prices?”—if that’s the only criteria, that’s debatable. But we also need to consider the impact on local communities and the impact on innovation.

If you look across the economy, if you have multiple players in an industry, you have more customization, more innovation, greater choice for consumers. The more you have consolidation, the less likely you are to invest in innovation. It becomes all about driving down cost and mass production. And that’s not good for innovation in an industry.

Madrigal: The obvious counterargument that people have been making is that Whole Foods controls a teensy tiny fraction of the overall grocery market—1.2 percent, according to the research firm GlobalData.

Khanna: Well, the question is more, what is the potential for it to become? If you look at the past history in Amazon, they were willing to have losses for years to grow their position with the industry. The concern is there could be predatory pricing where they are able to absorb huge losses, which threatens other grocers.

And this has to be viewed not just in its implications for the grocery vertical: Is this amplifying Amazon’s online dominance into the physical retail space? It shouldn’t just be viewed as limited to groceries, but in the broader context of Amazon pushing into brick-and-mortar retail.

What I’ve said is that all of that has to be reviewed by the Department of Justice and the Federal Trade Commission to see what is the impact of such a merger given the market share that Amazon does have in many industries.

Madrigal: Who are you thinking through all these issues with? It seems as if there is a group of people in and around D.C. who are rethinking antitrust policy.

Khanna: I think there is a group. There is [Minnesota Congressman] Rick Nolan, who is interested in starting a monopoly caucus in Congress. He’s very concerned about the concentration in industries and the concentration of economic power and what that means for jobs.

I’ve talked with [Massachusetts] Senator Elizabeth Warren in the context of defense contractors’ monopolization of the defense industry and what that means for prices.

Then there’s [New America’s] Matt Stoller’s and Lina Khan’s work. It has gotten the attention of some of us in Congress who think we need to reorient antitrust policy away from the Robert Bork days. Bork made the whole thing a litmus test just about consumer prices, so that if something helps consumer prices, it can’t be an antitrust violation.

The problem is that there wasn’t a consideration of long-term prices. Even if consumers benefited in the short term, there have been cases—airlines, ISPs—where prices hurt consumers in the long term. And it didn’t consider the impact on wages and on local jobs and small businesses, which create most of the jobs. It didn’t take into account the impact on communities. I know what Felipe’s means to the family who created it and the community that it’s in.

Madrigal: Are there specific cases that show the way you think antitrust jurisprudence should be handled?

Khanna: There’s a 1966 Supreme Court case called United States v. Von's Grocery Co. The court blocked a merger between two grocery stores in Los Angeles to prevent a trend towards concentration. And the court said that the dominant theme in Congress was what was considered to be a rising tide of concentration in the American economy. It’s a Supreme Court case. Still good law. So, the courts have looked at economic concentration, particularly in grocery, and that’s a strain of jurisprudence that should be amplified. [From the decision: “The courts must be alert to protect competition against increasing concentration through mergers especially where concentration is gaining momentum in the market.”]

Madrigal: The argument that you’re making seems as if it could be extended to many other technology businesses. The online ad market, for example, is dominated by two companies, Google and Facebook. Are you pushing for tougher antitrust measures across the board?

Khanna: I think we need to have stronger antitrust enforcement. The biggest challenge in the internet space is the ISPs—AT&T, Comcast, Charter—and the fact that we’re paying five times as much for access to the internet compared to Europe. There are only five companies and not much choice because of the extraordinary infrastructure cost. And there is the airline industry. So, in general, we need stronger antitrust enforcement.

What makes the Amazon-Whole Foods deal so problematic is that they are going into an industry with large infrastructure, brick-and-mortar cost, and seeking to build consolidation where we already suffer from consolidation. It’s not like Walmarts and Targets have been good for wages or local grocery stores or niche producers. You already have a problem of concentration and this will just aggravate that.

Madrigal: But you’d like to see the antitrust decision-making overhauled.

Khanna: The big question that some of us in Congress are interested in is how do we reorient antitrust policy to consider all the factors of economic concentration. And consumer price and price discrimination is one factor. But there are also the loss of jobs, the impact on wages, the impact on local small businesses, and the impact on innovation within an industry.

And my point is that especially in a time with declining unionization, if you look at industries where they have numerous competitors and not a few big actors with high market concentration, there’s greater leverage for employees and wages, greater investment in innovation, greater leverage for suppliers, so less downward pressure on wages in supply chains. This is not universally true and there may be exceptions to that, but the FTC and DOJ need to consider all of these factors and make a holistic determination: Is a merger on balance helping wages, jobs, investment for innovation, and prices? Or is it, on balance, not?

And the problem of the current antitrust legislation is that it’s just a litmus test on prices and doesn’t consider all these other equally important factors. And that’s really the philosophical debate between Brandeis and the consensus all the way from Theodore Roosevelt, versus the shift to free-market absolutism that Robert Bork enabled.

Madrigal: Do you have any hope that this kind of antitrust transformation will happen during this administration?

Khanna: I hope so. I hope the president is consistent with his campaign promises. He said he’d look at antitrust issues very seriously. Working families, or as he puts it, forgotten Americans, are being shafted by large banks and large corporations. And he campaigned as a populist on antitrust. No one is saying he should arbitrarily make a decision on antitrust, but he should put resources behind the DOJ and FTC to review these things. I have great confidence in the career civil servants at the DOJ and the FTC.

It’s my hope and I’m optimistic that there will be a review.

The Normalization of Conspiracy Culture
June 17th, 2017, 02:45 AM

Updated on June 17, 2017 at 7:51 p.m. ET

The catastrophe wasn’t what it seemed. It was an inside job, people whispered. Rome didn’t have to burn to the ground.

Nearly 2,000 years ago, after the Great Fire of Rome leveled most of the city, Romans questioned whether the emperor Nero had ordered his guards to start the inferno so he could rebuild Rome the way he wanted. They said the emperor had watched the blaze from the summit of Palatine Hill, the centermost of the seven hills of Rome, plucking his lyre in celebration as countless people died. There’s no evidence of this maniacal lyre-playing, but historians today still debate whether Nero orchestrated the disaster.

What we do know is this: Conspiracy theories flourish when people feel vulnerable. They thrive on paranoia. It has always been this way.

So it’s understandable that, at this chaotic moment in global politics, conspiracy theories seem to have seeped out from the edges of society and flooded into mainstream political discourse. They’re everywhere.

That’s partly because of the richness of today’s informational environment. In Nero’s day, conspiracy theories were local. Today, they’re global. The web has made it easier than ever for people to watch events unfold in real time. Any person with a web connection can participate in news coverage, follow contradicting reports, sift through blurry photos, and pick out (or publish) bad information. The democratization of internet publishing and the ceaseless news cycle work together to provide a never-ending deluge of raw material that feeds conspiracy theories of all stripes.

From all over the world, likeminded people congregate around the same comforting lies, explanations that validate their ideas. “Things seem a whole lot simpler in the world according to conspiracy theories,” writes Rob Brotherton, in his book, Suspicious Minds: Why We Believe Conspiracy Theories. “The prototypical conspiracy theory is an unanswered question; it assumes nothing is as it seems; it portrays the conspirators as preternaturally competent; and as unusually evil.”

But there’s a difference between people talking about outlandish theories and actually believing them to be true. “Those are two very different things,” says Joseph Uscinski, a political science professor at the University of Miami and the co-author of the book American Conspiracy Theories. “There’s a lot of elite discussion of conspiracy theories, but that doesn’t mean that anyone’s believing them any more than they did in the past. People understand what conspiracy theories are. They can understand these theories as political signals when they don’t in fact believe them.”

And most people don’t, Uscinski says. His data shows that belief in partisan conspiracy theories maxes out at 25 percent—and rarely reaches that point. Imagine a quadrant, he says, with Republicans on the right and Democrats on the left. The top half of the quadrant is the people of either party who are more likely to believe in conspiracy theories. The bottom half is the people least likely to believe them. Any partisan conspiracy theory will only resonate with people in one of the two top-half squares—because to be believable, it must affirm the political worldview of a person who is already predisposed to believe in conspiracy theories.

“You aren’t going to believe in theories that denigrate your own side, and you have to have a previous position of buying into conspiracy logic,” Uscinski says.

Since conspiracy theories are often concerned with the most visible concentration of power, the president of the United States is a frequent target. “So when a Republican is president, the accusations are about Republicans, the wealthy, and big business; and when a Democrat is president, the accusations focus on Democrats, communists, and socialists,” Uscinski says.

“Right now,” he added, “things are a little different. Because of Donald Trump.”

As it turns out, the most famous conspiracy theorist in the world is the president of the United States. Donald Trump spent years spreading birtherism, a movement founded on the idea that his predecessor was born outside the country and therefore ineligible for the nation’s highest office. (Even when Trump finally admitted in September that he knew Barack Obama was born in the United States, he attempted to spark a new conspiracy.)

Now, Trump’s presidency is the focus of a range of conspiracies and cover-ups—from the very real investigation he’s under to the crackpot ideas about him constantly being floated by some of his detractors on the left. Like the implication that Paul Ryan and Mitch McConnell are involved in a money laundering scheme with the Russians, plus countless more theories about who’s funneling Russian money where and to whom.

“The left has lost its fucking mind, and you can quote me on that,” Uscinski said. “They spent the last eight years chastising Republicans about being a bunch of conspiracy kooks, and they have become exactly what they swore they were not. The hypocrisy is thick and it’s disgusting.”

Trump’s strategy in the face of all this drama has been to treat real and fake information interchangeably and discredit any report that’s unflattering to him. It’s why he refers to reputable news organizations as “fake news,” and why he brags about “going around” journalists by tweeting directly to the people. He wants to shorten the distance between the loony theories on the left and legitimate allegations of wrongdoing against him, making them indistinguishable.

Pushing conspiracy theories helped win Trump the presidency, and he’s now banking on the idea that they’ll help him as president. He’s casting himself as the victim of a new conspiracy—a “witch hunt” perpetrated by the forces that want to see him fail.

“Donald Trump communicates through conspiracy theories,” Uscinski says. “You can win the presidency on conspiracy theories, but it’s very difficult to govern on them. Because conspiracy theories are for losers, and now he’s a winner.”

What he means is that conspiracy theories are often a way for those who perceive themselves as underdogs to express an imbalance of power. “But if you control the Supreme Court, the Senate, the House, and the White House, you can’t pull that,” Uscinski says. “Just like how Hillary Clinton can’t, in 1998, say her husband’s troubles are due to a vast right-wing conspiracy.”

Donald Trump may be the most famous conspiracy theorist in America, but a close second is the Infowars talk-radio personality Alex Jones, who has made a name for himself spewing reprehensible theories. He claimed the Sandy Hook Elementary School massacre was a hoax. He says 9/11 and the Boston Marathon bombings were carried out by the U.S. government. Jones has an online store where he peddles products like iodine to people prepping for the apocalypse.

Jones has long been a controversial figure, but not an enormously well-known one. That’s changing. Jones was a vocal supporter of Trump, who has in turn praised Jones. “Your reputation is amazing,” Trump told him in an Infowars appearance in 2015. “I will not let you down.” Jones has claimed he is opening a Washington bureau and considering applying for White House press credentials.

The latest Jones drama is a three-parter (so far): First, the NBC News anchor Megyn Kelly announced she had interviewed Jones, and that NBC would air the segment on Sunday, June 18. Next came the backlash: People disgusted by Jones blasted Kelly and NBC, saying a man whose lies had tortured the families of murdered children should never be given such a prominent platform. Even Jones joined the fracas, saying he’d been treated unfairly in the interview. Finally, on Thursday night, Jones claimed he had secretly recorded the interview, and would release it in full. (So far, he has released what seems to be audio from a phone conversation with Kelly that took place before the interview.)

Kelly has defended her decision to do the interview in the first place by describing Jones’s popularity: “How does Jones, who traffics in these outrageous conspiracy theories, have the respect of the president of the United States and an audience of millions?” The public interest in questioning a person like Jones, she argues, eclipses any worries about normalizing his outlandish views. The questions are arguably more valuable than the answers.

Many journalists agree with Kelly’s reasoning. But it’s also true, scholars say, that giving a platform to conspiracy theorists has measurable harmful effects on society. In 1995, a group of Stanford University psychologists interviewed people either right before or right after they’d viewed Oliver Stone’s 1991 film JFK, which was full of conspiracy theories. Brotherton, who describes the findings in Suspicious Minds, says people leaving the movie described themselves as less likely to vote in an upcoming election and less likely to volunteer or donate to a political campaign, compared with those walking in. “Merely watching the movie eroded, at least temporarily, a little of the viewer’s sense of civic engagement,” Brotherton writes.

There are other examples of real-world consequences of giving platforms to conspiracy theorists, too. The conspiracy theory known as Pizzagate, which rose to prominence across websites like 4chan and niche conservative blogs, resulted in a man firing a weapon in a Washington, D.C., pizza parlor.  

The debate over Kelly’s interview comes on the heels of another high-profile conspiracy theory that sent shockwaves through conservative media circles. At the center of that scandal were the TV host Sean Hannity, who pushed a conspiracy theory about the unsolved murder of a Democratic National Committee staff member, and an explosive Fox News report about the murder that was eventually retracted.

* * *

There’s a popular science-fiction podcast, Welcome to Night Vale, developed around the idea of life in a desert town where all conspiracy theories are true. It was first released in June 2012, the summer before a U.S. presidential election, at a moment when Trump was test-driving a new anti-Obama conspiracy. “I wonder when we will be able to see @BarackObama’s college and law school applications and transcripts,” he tweeted the day Night Vale launched. “Why the long wait?”

Joseph Fink, who co-created the podcast, says conspiracy theories today continue to function the way they always have. Conspiracy theories are easy ways to tell difficult stories. They provide a storyline that makes a harsh or random world seem ordered. “Especially if it’s ordered against you,” he says. “Since, then, none of it is your fault, which is even more comforting.”

“That said, more extreme conspiracy theories are becoming more mainstream, which is obviously dangerous,” Fink adds. “Conspiracy theories act in a similar way as religious stories: they give you an explanation and structure for why things are the way they are. We are in a Great Awakening of conspiracy theories, and like any massive religious movement, the same power it has to move people also is easily turned into a power to move people against other people.”

Look for the last awakening of this sort in the United States, and you’ll find a sea of similarities—of course, as conspiracy theories tell us, it’s easy to find connections when you go looking for them. Several scholars—people who focus on real conspiracies and people who study conspiracy theories—say the paranoia surrounding the Trump presidency evokes the tumult surrounding the Vietnam War. It’s not that conspiracy theories weren’t, at times, rampant before that. In the 1940s and 1950s, McCarthyism and the trial of Alger Hiss brought with them a surreal spate of hoaxes and misinformation. But it was the assassination of President John F. Kennedy that set off a “general sense of suspicion” that would permeate the culture for some time, says Josiah Thompson, the author of Six Seconds in Dallas: A Micro-Study of the Kennedy Assassination.

“Part of that was, what occurred almost immediately after the assassination, in the years afterward, was Vietnam,” Thompson said. “And over time, a complete loss of confidence in whatever the government was saying about Vietnam. That was not just from the presidency; that was from the government itself.”

This was also a period in which some of the most dramatic ideas that had been disparaged as conspiracy theories turned out to be true. “I am not a crook,” Nixon had insisted. Less than a year later, he resigned. Nixon and Trump are compared not infrequently; not every president is so thin-skinned and antagonistic toward the press. Jennifer Senior, reviewing a recent Nixon biography, wrote that “the similarities between Nixon and Trump leap off the page like crickets.” Nixon may have been increasingly paranoid in the final months of his presidency, but he didn’t have access to the technology that Trump uses to showcase his conspiracy-mindedness.

“With real conspiracy theorists, there’s a kind of—how to put it—almost a dialectic operative,” Thompson says. “Like Trump. You have to keep making wilder and wilder pronouncements over time to hold your audience.”

I tell Thompson about the idea Uscinski had shared: that a person can win the presidency on conspiracy theories, but that they don’t work so well once you’re in office. He seems to agree. “In a campaign, what you’re trying to do is affect people’s opinions that will be harvested on one day,” he said. “But governing doesn’t have to do with people’s opinions. It has to do with facts. That’s the real difference.”

When the facts are disputed, of course, you do the best you can with the evidence you can find. Thompson has spent years thinking about all this. When I bring up the sheer scale of unknown unknowns in people’s understanding of history, Thompson quotes the writer Geoffrey O’Brien: “‘History unfolds as always in the midst of distraction, misunderstanding, and partially obscured sight-lines,’” he says, reading the line from O’Brien’s 2016 review of the novel Black Deutschland by Darryl Pinckney.*

“And that’s the trouble,” Thompson says. “What may appear as conspiracy theory at one point turns out to be truth at another.”

I ask Thompson how sure he is about the official explanation of the JFK assassination, that there was one gunman who fired on the president’s motorcade from the Texas School Book Depository.

Thompson believes, based on controversial acoustic evidence, that on November 22, 1963, a shot was fired from the grassy knoll at Dealey Plaza—not just from the depository. “The acoustics give us a kind of template for how the event occurred—these two flurries of shots, separated by about six seconds.” (Thompson later clarified that he believes the flurries of shots were 4.6 seconds apart.) He says it was two shots in the second flurry that killed Kennedy.**

Thompson pauses.

“Does that make me a conspiracy theorist?”

He laughs.

“After all these years? What do you think?”


* This article originally quoted Josiah Thompson as having said, “history unfolds, as always, in the midst of distraction, misunderstanding, and partially obscured sight-lines.” After publication, Thompson clarified that he had been quoting the New York Review of Books writer Geoffrey O’Brien, who first wrote the line in his review of the Darryl Pinckney novel Black Deutschland.

** Thompson clarified after publication that he believes the flurries of shots in the Kennedy assassination were 4.6 seconds apart, not six seconds apart. He believes Kennedy was killed by two shots in the second flurry, not by the two flurries of shots.