Technology | The Atlantic
Trolls Are Winning the Internet, Technologists Say
March 29th, 2017

I’m going to confess an occasional habit of mine, which is petty, and which I would still enthusiastically recommend to anyone who frequently encounters trolls, Twitter eggs, or other unpleasant characters online.

Sometimes, instead of just ignoring a mean-spirited comment like I know I should, I type in the most cathartic response I can think of, take a screenshot, and then file that screenshot away in a little folder that I only revisit when I want to make my coworkers laugh.

I don’t actually send the response. I delete my silly comeback and move on with my life. For all the troll knows, I never saw the original message in the first place. The original message being something like the suggestion, in response to a piece I once wrote, that there should be a special holocaust just for women.

It’s bad out there, man!

We all know it by now. The internet, like the rest of the world, can be as gnarly as it is magical.

But there’s a sense lately that the lows have gotten lower, that the trolls who delight in chaos are newly invigorated and perhaps taking over all of the loveliest, most altruistic spaces on the web. There’s a real battle between good and evil going on. A new report by the Pew Research Center and Elon University’s Imagining the Internet Center suggests that technologists widely agree: The bad guys are winning.

Researchers surveyed more than 1,500 technologists and scholars about the forces shaping the way people interact with one another online. They asked: “In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”

The vast majority of those surveyed—81 percent of them—said they expect the tone of online discourse will either stay the same or get worse in the next decade.

Not only that, but some of the spaces that will inevitably crop up to protect people from trolls may contribute to a new kind of “Potemkin internet,” pretty façades that hide the true lack of civility across the web, says Susan Etlinger, a technology industry analyst at the Altimeter Group, a market research firm.

“Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view,” Etlinger told Pew. “The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor.”

Tor is software that enables people to browse and communicate online anonymously—so it’s used by people who want to cover their tracks from government surveillance, those who want to access the dark web, trolls, whistleblowers, and others.  

“Of course, this is already happening, just out of sight of most of us,” Etlinger said, referring to the use of hidden channels online. “The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”

The uncomfortable truth is that humans like trolling. It’s easy for people to stay anonymous while they harass, pester, and bully other people online—and it’s hard for platforms to design systems to stop them. Hard for two reasons: One, because of the “ever-expanding scale of internet discourse and its accelerating complexity,” as Pew puts it. And, two, because technology companies seem to have little incentive to solve this problem for people.

“Very often, hate, anxiety, and anger drive participation with the platform,” said Frank Pasquale, a law professor at the University of Maryland, in the report. “Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”

News organizations, which once set the tone for civic discourse, have less cultural importance than they once did. The rise of formats like cable news—where so much programming involves people shouting at one another—and talk radio is a clear departure from a once-higher standard of discourse in professional media. Few news organizations are stewards for civilized discourse in their own comment sections, which sends mixed messages to people about what’s considered acceptable. And then, of course, social media platforms like Facebook and Twitter serve as the new public square.

“Facebook adjusts its algorithm to provide a kind of quality—relevance for individuals,” said Andrew Nachison, the founder of We Media, in his response to Pew. “But that’s really a ruse to optimize for quantity. The more we come back, the more money they make... So the shouting match goes on.”

The resounding message in the Pew report is this: There’s no way the problem in public discourse is going to solve itself. “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day,” said Randy Bush, a research fellow at Internet Initiative Japan, in his response to Pew.

Many of those polled said that we’re now witnessing the emergence of “flame wars and strategic manipulation” that will only get worse. This goes beyond obnoxious comments, or Donald Trump’s tweets, or even targeted harassment. Instead, we’ve entered the realm of “weaponized narrative” as a 21st-century battle space, as the authors of a recent Defense One essay put it. And just like other battle spaces, humans will need to develop specialized technology for the fight ahead.

Researchers have already used technology to begin to understand what they’re up against. Earlier this month, a team of computer scientists from Stanford University and Cornell University wrote about how they used machine-learning algorithms to forecast whether a person was likely to start trolling. Using their algorithm to analyze a person’s mood and the context of the discussion they were in, the researchers got it right 80 percent of the time.     
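
For readers who want a concrete picture of what such a classifier looks like, here is a minimal sketch. The feature names, the numbers, and the tiny training set are invented for illustration; the researchers’ actual model was certainly more sophisticated.

```python
# A minimal sketch of a mood-plus-context troll classifier -- not the
# researchers' actual model. Features and training data are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row: [negative_mood_score, hours_past_midnight, prior_troll_comments_in_thread]
X_train = [
    [0.9, 1, 3],   # bad mood, late night, troll-heavy thread
    [0.1, 10, 0],  # good mood, morning, civil thread
    [0.7, 2, 1],
    [0.2, 9, 0],
]
y_train = [1, 0, 1, 0]  # 1 = the user's next comment was flagged as trolling

model = LogisticRegression()
model.fit(X_train, y_train)

# Estimate the probability that a grumpy late-night commenter in a
# troll-led thread posts a troll comment next.
print(model.predict_proba([[0.8, 0, 2]])[0][1])
```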

They learned that being in a bad mood makes a person more likely to troll, and that trolling is most frequent late at night (and least frequent in the morning). They also tracked the propensity for trolling behavior to spread. When the first comment in a thread is written by a troll—a nebulous term, but let’s go with it—additional trolls are twice as likely to chime in as they are in a conversation that isn’t led off by a troll, the researchers found. On top of that, the more troll comments there are in a discussion, the more likely it is that participants will start trolling in other, unrelated threads.

“A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, and even more troll comments elsewhere,” the Stanford and Cornell researchers wrote. “As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.”

Using technology to understand when and why people troll is essential, but understanding is only a start; many people agree that the scale of the problem also demands technological solutions. Stopping trolls isn’t as simple as creating spaces that prevent anonymity, many of those surveyed told Pew, because doing so also enables “governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech, and shape social debate,” Pew wrote.

“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” Bailey Poland, the author of “Haters: Harassment, Abuse, and Violence Online,” told Pew. Pseudonymity may be one useful approach—so that someone’s offline identity is concealed, but their behavior in a certain forum over time can be analyzed in response to allegations of harassment. Machines can help, too: Chatbots, filters, and other algorithmic tools can complement human efforts. But they’ll also complicate things.

“When chatbots start running amok—targeting individuals with hate speech—how will we define ‘speech’?” said Amy Webb, the CEO of the Future Today Institute, in her response to Pew. “At the moment, our legal system isn’t planning for a future in which we must consider the free speech infringements of bots.”

Another challenge is that no matter what solutions people devise to fight trolls, the trolls will fight back. Even among those who are optimistic that the trolls can be beaten back, and that civic discourse will prevail online, there are myriad unknowns ahead.

“Online discourse is new, relative to the history of communication,” said Ryan Sweeney, the director of analytics at Ignite Social Media, in his response to the survey. “Technological evolution has surpassed the evolution of civil discourse. We’ll catch up eventually. I hope. We are in a defining time.”

Encryption Won’t Stop Your Internet Provider From Spying on You
March 29th, 2017

Earlier this month, a lobby group for major internet providers like Comcast and Verizon attacked a set of online-privacy regulations that they believe are too strict. In a filing to the Federal Communications Commission, the group argued that providers should be able to sell customers’ internet history without the customers’ permission, because that information shouldn’t be considered sensitive. Besides, the group contended, web traffic is increasingly encrypted anyway, making it invisible to providers.

It’s certainly true that encryption is on the rise online. Data from Mozilla, the company behind the popular Firefox browser, shows that more than half of web pages use HTTPS, the standard way of encrypting web traffic. When sites like The Atlantic use HTTPS, a lock icon appears in users’ web browsers, indicating that the information being sent to and from servers is scrambled and can’t be read by a third party that intercepts it—that includes ISPs.

But even if 100 percent of the web were encrypted, ISPs would still be able to extract a surprising amount of detailed information about their customers’ virtual comings and goings. This is particularly significant in light of a bill that passed Congress this week, which granted the lobby group’s wish: It allows ISPs to sell their customers’ private browsing history without their consent.

Although the exact URL of a page accessed through HTTPS is hidden from the provider, the provider can still see the domain the URL is on: For example, your ISP can’t tell exactly what story you’re reading right now, but it can tell that you’re somewhere on theatlantic.com. That may not reveal much other than your (excellent) taste in news sources—but a user who visited a page on plannedparenthood.com and then a page on dcabortionfund.com may have revealed much more sensitive information.
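
For a concrete picture of that split, here is a simplified sketch. It assumes the usual HTTPS setup, in which the hostname leaks through DNS lookups and the TLS handshake while the path and query string stay inside the encrypted connection.

```python
# A simplified illustration of what an ISP can and cannot see when a page
# is fetched over HTTPS: the hostname is visible via DNS and the TLS
# handshake, while the path and query string travel inside the encrypted
# tunnel. The URL below is just an example.
from urllib.parse import urlparse

def isp_view(url: str) -> dict:
    parts = urlparse(url)
    query = "?" + parts.query if parts.query else ""
    return {
        "visible_to_isp": parts.hostname,        # e.g. www.theatlantic.com
        "hidden_by_https": parts.path + query,   # the specific story and parameters
    }

print(isp_view("https://www.theatlantic.com/technology/archive/some-story/?utm=x"))
```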

That’s an example from a 2016 report prepared by Upturn, a think tank that focuses on civil rights and technology. The Upturn report also sets out some of the sneaky ways that user activity can be decoded based only on the unencrypted metadata that accompanies encrypted web traffic—also known as “side channel” information. (These methods probably aren’t widely in use right now, but they could be deployed if ISPs decided it’s worthwhile to try and learn more about encrypted traffic.)

Website fingerprinting, for example, relies on the unique characteristics of a particular web page to reveal when it’s being accessed. When a user visits a page, his or her browser pulls data from various servers in a particular order. Based on that pattern, a network provider might be able to tell what page the user is visiting, even without being able to read the contents of the data streams it’s carrying. (For this to work, the network operator would have to have already analyzed the loading pattern associated with the particular website the user is visiting.)
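
A toy version of that matching step might look like the sketch below. The catalogued traces are invented, and real fingerprinting systems rely on much richer features, but the principle of classifying traffic by its shape rather than its content is the same.

```python
# A toy version of website fingerprinting: each catalogued page is
# represented by the sizes (in bytes) of the encrypted responses observed
# while it loads, and an unknown trace is matched to the closest pattern.
# The traces below are made up for illustration.
fingerprints = {
    "news-homepage": [14200, 830, 512000, 2100, 98000],
    "login-page":    [9000, 400, 1200, 300, 250],
    "video-landing": [20000, 1500, 910000, 45000, 87000],
}

def distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def guess_page(observed):
    # The observer never decrypts anything; it only compares traffic shapes.
    return min(fingerprints, key=lambda name: distance(fingerprints[name], observed))

print(guess_page([14500, 900, 509500, 2000, 97500]))  # -> "news-homepage"
```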

In November, a group of researchers from Israel’s Ben-Gurion and Ariel Universities demonstrated a way to extend the idea behind website fingerprinting to videos watched on YouTube. By matching the encrypted data patterns created by a user viewing a particular video to an index they’d created previously, they could tell what video the user was watching from within a limited set, with a startling 98 percent accuracy.

Ran Dubin, a Ph.D. candidate at Ben-Gurion and the research paper’s primary author, told me that the discovery came out of work he’d been doing to optimize video streaming. He wanted to know if he could figure out the quality at which users were watching YouTube videos, so he analyzed the way devices received data as they streamed.

He quickly realized he’d stumbled into something bigger. “The network patterns that belong to each video title have very, very strong meaning,” Dubin said. “I found out that I could actually recognize each stream.”

The giveaway, he found, was embedded in the way devices choose a bitrate—an indicator of video quality—at which to stream the video. At the beginning of a stream, the player receives quick spurts of data, which begin to space apart after the video has been playing for a while and the player has settled on a bitrate. The pattern of these spikes helps identify each individual video.

The researchers assembled fingerprints from 100 YouTube videos by using a browser crawler to automatically download each video under various network conditions, then cataloguing the resulting data pattern. Next, they analyzed the traffic patterns created by a device as it played one of 2,000 videos—including the 100 target videos. Using an algorithm to match the stream to the nearest fingerprint, the researchers could tell when one of the target videos was being watched. Not once was a video outside the set of 100 accidentally identified as inside the target set.
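
The researchers’ method is more elaborate than this, but the open-set logic can be sketched simply: compare an observed stream against every catalogued fingerprint, and refuse to name a match unless the closest one is close enough. The byte-rate patterns and the threshold below are hypothetical stand-ins.

```python
# A sketch of open-set matching: an unknown stream is compared against the
# catalogued video fingerprints, and if even the best match is too far away,
# it is labeled "not in the target set" rather than forced onto the nearest
# video. All numbers are hypothetical.
fingerprints = {
    "video_A": [800, 650, 120, 90, 85, 80],   # bytes/sec: early bursts, then steady playback
    "video_B": [400, 900, 700, 60, 55, 50],
}
REJECT_THRESHOLD = 200  # tuned so unrelated streams are not misidentified

def match_stream(observed):
    def dist(pattern):
        return sum(abs(x - y) for x, y in zip(pattern, observed)) / len(observed)
    best = min(fingerprints, key=lambda name: dist(fingerprints[name]))
    return best if dist(fingerprints[best]) < REJECT_THRESHOLD else None

print(match_stream([790, 660, 118, 92, 84, 81]))  # -> "video_A"
print(match_stream([50, 60, 55, 52, 49, 48]))     # -> None (outside the target set)
```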

The technique could be used by law enforcement to identify users who are watching ISIS propaganda videos, Dubin said. It could also be used to compile data on users’ viewing habits and sell it to advertisers—and that’s where the privacy rules that just passed Congress come in.

If President Trump signs the bill, ISPs will have free rein to sell data they gather on their customers without asking for consent. As online encryption spreads further and further across the internet, there will be monetary incentives to dig up as much information on users as possible, to offset the loss of access to more detailed unencrypted data. Tricks like Dubin’s, which might have otherwise been too costly and inconvenient to put in place, could become an attractive way to glean valuable information about user habits and turn them over to advertisers for big money.

Escaping Is Not a Form of Understanding
March 29th, 2017

The little transgressions are the forgivable ones. Local knowledge in any place is earned with time. So it’s understandable why someone who is only visiting Hawaii might think to describe poke as “sashimi salad,” for example, though that’s not quite right.

But then there are the big transgressions, the characterizations of a place that are so unmoored from a sense of history that it’s almost shocking.

Almost. But Hawaii has seen it all before.

“The Hawaii Cure,” a feature published March 21 by The New York Times Magazine, treads a well-worn path of colonialist tropes as a writer indulges his escapism fantasies with a trip to Hawaii. That’s nothing new. Yet in the internet age, a lighthearted essay can travel quickly back home and elicit a scathing response from the people who live in the place it depicts. Dozens of Hawaii people I know from when I lived on Oahu responded to the essay—in text messages, online chats, and Facebook comments, to me and to one another, with messages like: “Not today, Satan,” and “I like that you have the print version so you can BURN IT,” and a keyboard-smashing “owfi;ds'pfwePDKFMQE;LFSGKDFJ.” Let’s just say the emoji responses were not kind either.

The travel essay, as a form, is particularly fraught in places where indigenous groups were displaced by colonialism. Theodore Roosevelt’s writings on Africa, for example, were deeply influential in shaping global perceptions of a place that he described as having “the spectacle of a high civilization all at once thrust into and superimposed upon a wilderness of savage men and savage beasts.”

Africa was viewed as a vehicle for escapism for Roosevelt and other writers, including the many inspired by him who would follow suit. Travel writing was, for a time, one of the main ways people learned about distant cultures.

“Certain places seem to exist mainly because someone has written about them,” Joan Didion wrote, in her own essay about Hawaii, published in The White Album in 1979. “Kilimanjaro belongs to Ernest Hemingway. Oxford, Mississippi, belongs to William Faulkner...” Coming from a writer’s writer, like Didion, who is herself a dedicated Hemingway fan, this seems to be meant as a compliment. But it is her use of the word “belongs” that hangs on the page.

Travel writing is traditionally concerned with the writer’s sense of belonging, or lack thereof—the spectacle of being somewhere new, the sense of displacement one feels. Focus on your own sense of self in a place where questions of belonging are at the heart of local politics and culture, however, and you risk misunderstanding  the place entirely. Escaping is not a form of understanding, anyway.

“It’s worth noting,” writes David M. Wrobel in his book Global West, American Frontier: Travel, Empire, and Exceptionalism from Manifest Destiny to the Great Depression, “that Roosevelt overtly insisted that politics, whether domestic or foreign, not intrude in his African experience.”

Which brings us back to “The Hawaii Cure,” billed as  “a first trip to the island, in a desperate bid to escape the news,” but with no hint at the short distance between escapism and exploitation in the history of Hawaii, or in the history of travel writing for that matter.

“Can it be true?” the author asks. “The aloha spirit is real? Paradise on earth? An Eden of happy Americans moated from our national ravages of malevolence, contempt, uncertainty and fear?”

There are deep and complicated tensions in these questions. Hawaii is beautiful, yes, but it is not simply an “Eden of happy Americans.” Though many people in Hawaii are proud of its nearly 58 years of statehood, others don’t consider themselves to be American at all. The state’s economy is hugely dependent on both tourism and federal jobs, both of which can be viewed as complicit in a form of settler colonialism that shapes the way people perceive and experience life in Hawaii. This is heavy stuff, and worthy of consideration by all Americans, especially those who visit Hawaii.

The Times story doesn’t go there. Instead, it begins with a stereotype, a reference to Polynesians overeating. Its first scene takes place at a commercial luau. And though the author, Wells Tower, hints that he’s somehow in on the joke—it’s not totally clear what he’s lambasting. (In response to my interview requests for Tower and his editor, The New York Times told me they had “no comment.”) There are moments of self-deprecation in the essay, but the prevailing tone is one that supports the idea that Hawaii is, as Tower puts it, “a magical land where the laws of physics bend toward human satisfaction.”

Along with the idea that Hawaii exists to please outsiders is the recurring theme that it’s still never good enough. “Hankering after something incontestably Hawaiian, you end up on a charter bus bound for the Chief’s Luau at Sea Life Park 15 miles east on the Kalanianaole Highway. Never mind that what is most purely Hawaiian about the luau is its proficiency at extracting tourists’ dollars.”

Now that is something.

You might argue that whatever it was that was most “purely Hawaiian” is long gone, perhaps lost when Hawaii was first invaded by colonialists, or when the Hawaiian Kingdom was overthrown by outsiders in 1893, or when the United States annexed Hawaii in 1898, or when steamships gave way to commercial airplanes and ever-more hotels blossomed along the Waikiki coast. An allusion to anything being “purely Hawaiian,” if such a designation could be made, seems tone deaf in a place where some people’s housing is still determined by blood-quantum laws.

Tower goes on to say he always assumed Hawaii was a “meretricious luxury product,” worthless unless you quantified your own happiness in dollars. But on this trip, he says, he is ready to go to a place that is “notoriously nice.”    

“Give me a slack-keyed, macadamia-dusted holiday,” he writes, “where things are pretty and people are smiling, if only because it’s in their job description.”

The people we meet in the story, however, often come across as caricatures. There is the “tanned, professional butt” of a young woman on the beach. And later, “this coconut man (the second in our mounting tally),” who tells an old story about coconut water being used in place of blood transfusions during World War II. “I have heard this fable before and know it to be hogwash, but I say, ‘Oh, wow,’ and await my $10 change that does not appear to be forthcoming,” Tower writes. Eventually, he gets his change and departs, “full of gratitude for this fellow, not only because his coconuts are very fine, but for nipping a budding and inconvenient fancy that I might like to live here on the Big Island. His brand of coconut palaver is, I suspect, common in these parts.”

Is it at all possible that this particular brand of “coconut palaver” is just a guy who sells coconuts, and that he just sold you two coconuts, and that’s basically it? No matter. To the visitor, this encounter is strange and undesirable: “Encountering it on any sort of regular basis, straight-world mainlander that I am, would drive me out of my mind.”

The writer concedes to being awe-inspired by the sight of Kilauea, the long-erupting volcano on the Big Island, but then describes the lava flow as “newborn wads of America,” which is not exactly the tenor of respect that one might expect for a sacred site. (It’s also weirdly nationalistic language for describing a geologic phenomenon.)

As it happens, that section of the story contained a translation snafu, an understandable mistake—we all make them—and had a correction appended: “Because of an editing error, an earlier version of this article gave an incorrect English translation of the Hawaiian word ‘Kilauea.’ It is ‘much spreading,’ not ‘mush spreading.’”

Leaving readers with an image of spreading mush, however, seems about right.  

Who Owns Your Face?
March 27th, 2017

It takes a feast of facial imagery to teach a machine how to recognize an individual person.

This is why computer scientists so often use the faces of Hollywood celebrities in their research. Tom Hanks, for example, is in so many publicly available photographs that it’s fairly easy to build a Hanks database for algorithm-training purposes.

Depending on a researcher’s needs, there are many other available databases of human faces—some featuring tens of thousands of images. These collections of faces draw from public records and other sources: mugshots, surveillance footage, news photos, Google Images, and university studies.

It’s entirely possible that your face is in one of these databases. There’s no way to say for certain that it isn’t.

Your face is yours. It is a defining feature of your identity. But it’s also just another datapoint waiting to be collected. At a time when cameras are ubiquitous and individual data collection is baked into nearly every transaction a person can make, faces are increasingly up for grabs.

Data brokers already buy and sell detailed profiles that describe who you are. They track your public records and your online behavior to figure out your age, your gender, your relationship status, your exact location, how much money you make, which supermarket you shop at, and on and on and on. It’s entirely reasonable to wonder how companies are collecting and using images of you, too.

Facebook already uses facial recognition software to tag individual people in photos. Apple’s new app, Clips, recognizes individuals in the videos you take. Snap’s famous selfie filters work by mapping detailed points on individual users’ faces. (Snap says on its website that its technology doesn’t take the additional step of recognizing the faces it maps.) Software from the Chinese startup Face++ works similarly: It maps dozens of points on a person’s face, then stores the data it collects. The idea is to be able to use facial recognition systems for keyless entry to office buildings and apartment complexes, for example. Jie Tang, an associate professor at Tsinghua University, described to MIT Technology Review how he uses his faceprint to pay for meals: “Not only can he pay for things this way, he says, but the staff in some coffee shops are now alerted by a facial recognition system when he walks in,” and they greet him by name.

It’s understandable, then, that as these technologies rapidly advance, they have become fodder for some conspiracy theories—like the unsubstantiated claim that Snap is building a secret facial recognition database with the images of people who use its popular Snapchat app.

But such conspiracies aren’t as outlandish as they’re made out to be. Experts have been warning against facial-recognition systems for decades. The F.B.I.’s latest facial recognition tools give the agency the ability to scan millions of photos of ordinary Americans. “To be clear, this is a database—or a network of databases—comprised primarily of law-abiding Americans,” said Congressman Jason Chaffetz, a Utah Republican, in a House Committee on Oversight and Government Reform hearing on Wednesday.  “Eighty percent of the photos in the F.B.I.’s facial recognition network are of non-criminal entries.” The F.B.I. is able to access images from driver’s licenses in at least 18 states, as well as millions of mugshots.

“Most people have no idea that this is happening,” said Alvaro Bedoya, the executive director of the Center on Privacy and Technology at Georgetown Law, in testimony at the hearing. “The latest generation of this technology will allow law enforcement to scan the face of every man, woman, and child walking in front of a street surveillance camera… Do you have the right to walk down the street without the government secretly scanning your face? Is it a good idea to give government so much power with so few limits?”

The accuracy of the agency’s system is also a matter of debate. According to Chaffetz, roughly one in seven searches of the F.B.I. system returned a list of entirely innocent candidates, even though the actual target was in the database. And the agency doesn’t track its own rate of false positives, according to the Government Accountability Office, which underscores one of the most troubling scenarios for how facial recognition technology could create problems. “It would be one thing if facial-recognition technology were perfect, or near-perfect,” Chaffetz said, “but it clearly is not.”

“An inaccurate system will implicate people for crimes they didn’t commit,” said Jennifer Lynch, an attorney for the Electronic Frontier Foundation, in testimony at the hearing, “and it will shift the burden onto innocent defendants to show that they are not who the system says they are. This threat will disproportionately impact people of color. Face recognition misidentifies African Americans and ethnic minorities at higher rates than whites.”

The F.B.I. insists that its use of facial recognition technology is for investigative leads only, and that a faceprint isn’t so different from a fingerprint—or an in-person line-up for that matter. Facial recognition software is merely an extension of the work that law enforcement already does, said Kimberly Del Greco, the deputy assistant director at the F.B.I.’s Criminal Justice Information Services Division.

“It is a search of law enforcement photos by law enforcement agencies for law enforcement purposes,” she said in the hearing. “Law enforcement has performed photo lineups and manually reviewed mugshots for decades. Face recognition software allows this to be accomplished in an automated manner.”

Not just automated, but automatic—meaning that camera systems outfitted with facial recognition software would identify anyone in the frame. Privacy advocates and members of Congress agree that what happens to this sort of data is a complicated and urgent question. A smart surveillance camera may capture and identify attendees of a political rally, for instance, which could have a chilling effect on civic participation.

Eventually, it may be impossible for people to avoid targeted surveillance. “We might get to a point where you just don’t have the option to opt out,” said Alvaro Hoyos, the chief information security officer at the password and identity management firm OneLogin. “People might not want to think about or talk about it, but we’re going toward a state of constant surveillance.”

Actually, we may already be there.

Data-tracking systems are already able to follow your behavior—online and off—to produce a detailed portrait of you. “Websites do it already, but there’s a perception of the anonymity of being behind your keyboard,” Hoyos told me. And yet, he says, “there’s something about the human image, your image, that is a lot more intimate to us than pretty much anything else. It’s who you are.”

Is Trump Still Tweeting From His Unsecured Android Phone?
March 27th, 2017

There are two personalities on display in Donald Trump’s Twitter feed. One Trump generally spells things correctly, tweets flattering news stories, and politely thanks visitors for meeting with him. The other Trump is easily provoked, capitalizes random words, and lashes out in real time at things that annoy him.

These two genres of tweets generally come from two different devices—an Android phone and an iPhone—and thus presumably from different people. Last year, David Robinson, a data scientist at Stack Overflow, poked through months of Trump’s timeline and found that tweets from the Android phone were far more negative than the bland iPhone tweets. Trump uses an Android phone as his personal device, suggesting that he was behind the angrier tweets; the iPhone tweets probably came from staff.

To help The Atlantic’s journalists guess whether any given tweet comes from an aide or straight from Trump’s own two thumbs, my colleague Andrew McGill created a new channel called #trumptweets in Slack, the chat platform our newsroom uses to communicate. Every time the president of the United States sends a 140-character missive to the world, it lands in our Slack channel, with an extra piece of information: whether the tweet was sent from an iPhone or an Android device. (Tweets always include hidden metadata that indicate what device or software they were sent from.)
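
A check like the bot’s can be sketched in a few lines. In the Twitter API of that era, each tweet object carried a “source” field naming the client that posted it; the tweet in this example is fabricated, not one of the president’s.

```python
# A rough sketch of the device check described above. Tweet objects from the
# Twitter REST API of that era included a "source" field -- an HTML snippet
# naming the client, e.g. "Twitter for Android" or "Twitter for iPhone".
# The sample tweet below is made up for illustration.
import re

def tweet_client(tweet: dict) -> str:
    # Strip any anchor tag wrapped around the client name.
    return re.sub(r"<[^>]+>", "", tweet.get("source", "")).strip()

sample_tweet = {
    "text": "Example tweet text",
    "source": '<a href="http://twitter.com/download/android" rel="nofollow">Twitter for Android</a>',
}

label = tweet_client(sample_tweet)
print("Likely Trump himself" if "Android" in label else "Probably a staffer")
```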

The trend Robinson discovered held up for many months. Angry tweets like these would arrive in the wee hours from the presidential Samsung Galaxy. In the afternoon, staid iPhone tweets respectfully thanked crowds at rallies for Making America Great Again.

Then, suddenly, the flow of Android tweets dried up. After an uncharacteristically dry March 8 tweet about a workforce report from LinkedIn (which, true to form, did not actually include a link), two and a half weeks passed without another from the Android.

The presidential tweets kept coming, of course—and coming and coming and coming—but they were all posted from an iPhone. Oddly, though, the feed’s split personality didn’t fade. Tweets like this gem from Thursday, March 23, which decried the “totally biased and fake news reports of the so-called Russia story on NBC and ABC,” came from an iPhone. (That message was punctuated with a particularly Trumpian coda: “Such dishonesty!”) But so too did tweets like this one from just two days prior, which included images and the phrase “ingrained in our nation’s fabric,” two dead giveaways that the tweet came from staff.

What happened? Some reporters speculated that Trump may have finally gotten an iPhone. Perhaps that old, unsecured Samsung—so old, in fact, it can’t run the newest, safest version of the Android operating system—had been wrested from Trump’s fingers by a brave, security-conscious aide.

That would’ve been a relief to security experts who had warned for months about the dangers of Trump’s attachment to that phone. The device, likely a Samsung Galaxy S3, has such serious security problems that it’s probably “compromised by at least one—probably multiple—hostile foreign intelligence services and is actively being exploited,” according to Nicholas Weaver, a security researcher at the University of California, Berkeley. That would mean that foreign agents could be listening in on his private conversations, monitoring him as he moves around the country, and potentially seeing the world through the eyes of his smartphone camera.

But lo, the Android tweets are back. The spectacular collapse of the American Health Care Act, Trump’s response to Obamacare, was enough to bring the old device out from retirement this weekend. Here’s what the president tweeted from an Android on Saturday:

It’s not certain that Trump picked up the old Android phone he’d been carrying around throughout the campaign to send off that tweet and another just minutes later. It could be that Trump got a newer, more secure Android phone, or—ideally—a specially secured smartphone issued by the intelligence community. But given the White House’s apparent lack of interest in policing Trump’s Android during the early months of his presidency, it’s unclear what would have prompted a sudden technology upgrade. (One potential hint: WikiLeaks released details about classified CIA hacking tools the day before Trump’s Android tweets went dark for a couple of weeks.)

The White House did not respond to repeated requests for comment. At press time, Trump had only fired off two tweets from the Android before the iPhone tweets took over again, including one that included a shorthand for “Obamacare” that the president has used before:

So what’s going on when Trump-like tweets come from the iPhone? There are a few possibilities. Maybe Trump really did get an iPhone, which he now uses as his primary tweeting tool. Perhaps he’s taken to dictating his tweets, idiosyncrasies and all, to an iPhone-toting staffer. Or maybe his aides have gotten really good at tweeting in Trumpian English, and have been given permission to get a little edgy with the @realDonaldTrump account. At a time when members of his own party have called on Trump to stop tweeting so much, sending spicier iPhone tweets may be the kind of concession that could keep the president off of Twitter.

Trump also has control of the official @POTUS account, which he took over from Barack Obama on inauguration day, but that one is run by Dan Scavino, the White House social-media director. Tweets on that account from the president are signed “DJT,” but he’s only put his name on a handful of them, none of which really sound like they came from him.

It could be argued that Trump’s tweets are themselves a threat to national security. But if they at least came from a secure device, the president would be safer from surveillance than he is now. Until it’s clear that he’s turned in his old Samsung Galaxy—an unlikely prospect after this weekend’s tweets—it’s still possible that a hacker with access to basic exploits could gain access to Trump’s every movement and word.

Can Uber Survive Without Self-Driving Cars?
March 26th, 2017

In the era of self-driving cars, a scary but otherwise uneventful car crash can be huge news. This was the case in Tempe, Arizona, on Friday, when an Uber self-driving car was hit so hard that it rolled onto its side. There were no serious injuries reported.

Uber has grounded its fleet of self-driving cars in Arizona as a result, a spokeswoman for the company told me. “We are continuing to look into this incident, and can confirm we had no backseat passengers in the vehicle,” an Uber spokesperson said in a statement provided to The Atlantic. Uber also suspended testing of its self-driving vehicles in Pittsburgh and San Francisco “for the day, and possibly longer,” The New York Times reported. In addition to its global ride-hailing service, Uber has been testing its self-driving car technology on public roads in Arizona, Pennsylvania, and California for several months.

The vehicle involved in the Arizona crash was in autonomous mode at the time of the collision—meaning the car was driving itself with a human riding behind the wheel—but police in Tempe say Uber wasn’t to blame for what happened. A human-driven vehicle failed to yield at a traffic signal, and collided with the Uber SUV, police said in local news reports.

The incident is a reminder of the need for this technology in the first place: Humans are abysmally bad drivers. But it’s also a reminder of how much Uber has riding on the success of self-driving cars.

And how much is that? Everything, basically.

If self-driving cars are adopted on a mass scale and Uber isn’t leading the way, its current business—which revolves around humans driving cars—is made obsolete. But if Uber finds a way to dominate in the development of self-driving cars, it can remove those costly human drivers from its business model—a scenario that could mean a windfall for Uber. Succeeding on this front “is basically existential for us,” Uber’s CEO, Travis Kalanick, told Bloomberg Businessweek in August.

The ride-sharing company is uniquely positioned in the self-driving car space this way. Google’s self-driving car project, now rebranded as Waymo, could fail and its parent company would still have a massively profitable search engine to fall back on.

“Every other company isn’t betting the company’s future on self-driving cars,” said Arun Sundararajan, a professor at New York University’s Stern School of Business. “Google will be fine. Uber is the one company in the world who has really made this all-or-nothing bet.”

The crash in Tempe is unlikely to derail Uber’s self-driving car aspirations—especially if the incident played out the way law enforcement described, and Uber was not at fault. But Uber is facing several other major problems. For one, there’s the federal lawsuit.

Waymo is suing Uber for intellectual property theft, claiming that an Uber engineer who used to work for Google stole 10 gigabytes of “highly confidential data” from Google’s servers, then used it to copy Google’s designs for a self-driving car. The lawsuit is a “particularly damaging development for Uber,” given how much is at stake, Sundararajan told me.

There are also reports of internal strife on Uber’s self-driving car team, which has escalated into a “mini civil war,” according to the tech-news site Recode. Leaders from Uber’s self-driving car unit gathered in San Francisco for a “critical summit” on the matter last week, hoping to solve leadership problems and figure out a way to stem departures from the embattled company. “With every engineer that defects,” Recode wrote, “Uber is feeding the fire of its competitors, which are growing by the day, both big and small.”

“Uber has tied their fortunes to the imminent arrival of fully autonomous cars, which is highly risky,” Sundararajan told me. “It’s an outcome with a lot of variability, right? It could be three years, it could be 10 years.”

In other words, anything that gets in the way of Uber’s work on self-driving cars, he says, is “particularly troubling” for the company’s survival.

Not Wanting to Be a Token in Tech
March 24th, 2017

Two readers are very wary of hiring practices in Silicon Valley that strongly take gender into account. Here’s Sally:

This article [“Why Is Silicon Valley So Awful to Women?”] refers a couple times to people saying that hiring women or minorities may “lower the bar” as some kind of evidence of bias. But usually when people say that, they are referring to using gender as a criteria for hiring. When you do that, you have to give less weight to technical merit.

And indeed, towards the end of the article, using such criteria is advocated. Whenever you set a “goal” (i.e. quota) that 40 percent of your workforce should have quality X when X has nothing to do with your ability, you are going to get people with lower-than-average ability. What’s worse, you have a situation where those in the company with quality X have less ability than those without that quality, which only reinforces the stereotypes about those people—which is unfair to those Xs who are competent.

Personally, I’d much rather companies focus on treating their female employees equally than worry about increasing the number of female employees. But that’s just me.

It’s also Carla Walton, a female engineer in HBO’s Silicon Valley:

More of Carla vs. Jared here. This next reader has an outlook and attitude similar to Sally’s:

I’m a senior tech executive in Silicon Valley who happens to be female. I also have a male name, which makes initial introductions interesting. (“Oh, I thought you would be a man...”) If it matters, in addition to leading an R&D technical team at work, I’m on [the board of a computer engineering department], and a startup advisor [for a prominent venture capital firm].

I have a lot to say about this article. On one side, I am burned out on the “women in tech” topic. I want to be included/recruited because I totally kill it and always bring my A-game—and never ever ever because I am a woman.

When Fingerprints Are as Easy to Steal as Passwords
March 24th, 2017

How do you prove who you are to a computer?

You could just use a password, a shared secret between you and the machine. But passwords are easily compromised—through a phishing scam, or a data breach, or some good old-fashioned social engineering—making it simple to impersonate you.

Today, you’re often asked to produce something more fundamental and harder to imitate than a password: something that you are rather than something that you know. Your fingerprint, for instance, can get you into a smartphone, a laptop, and a bank account. Like other biometric data, your fingerprints are unique to you, so when the ridges of your thumb come in contact with a reader, the computer knows you’re the one trying to get in.

Your thumb is less likely to wander off than a password, but that doesn’t mean it’s a foolproof marker of your identity. In 2014, hackers working for the Chinese government broke into computer systems at the Office of Personnel Management and made off with sensitive personal data about more than 22 million Americans—data that included the fingerprints of 5.6 million people.

That data doesn’t appear to have surfaced on the black market yet, but if it’s ever sold or leaked, it could easily be used against the victims. Last year, a pair of researchers at Michigan State University used an inkjet printer and special paper to convert high-quality fingerprint scans into fake, 3-D fingerprints that fooled smartphone fingerprint readers—all with equipment that cost less than $500.

In the absence of a state-sponsored cyberattack, there are other ways to glean someone’s fingerprint. Researchers at Tokyo’s National Institute of Informatics were able to reconstruct a fingerprint based on a photo of a person flashing a peace sign taken from nine feet away. “Once you share them on social media, then they’re gone,” the researcher Isao Echizen told the Financial Times.

Face-shape data is susceptible to hacking, too. A study at Georgetown University found that images of a full 50 percent of Americans are in at least one police facial-recognition database, whether it’s their driver’s-license photo or a mugshot. But a hacker wouldn’t necessarily need to break into one of those databases to harvest pictures of faces—photos can be downloaded from Facebook or Google Images, or even captured on the street.

And that data can be weaponized, just like a fingerprint: Last year, researchers from the University of North Carolina built a 3D model of a person’s head using his Facebook photos, creating a moving, lifelike animation that was convincing enough to trick four of five facial-recognition tools they tested.

The fundamental trouble with biometrics is that they can’t be reset. If the pattern of one of your fingerprints is compromised, that’s fine; you have a few backups. But if they’re all gone—some law-enforcement databases contain images of all ten fingers—getting them replaced isn’t an option. The same goes for eyes, which are used for iris or retina scans, and your face. Unlike a compromised password, these things can’t be changed without unpleasant surgery or mutilation.

“If Border Patrol and your bank and your phone all are collecting your fingerprint data, all it takes is one actor who figures out how to manipulate that and you’ve basically wiped out the usefulness of that information,” said Betsy Cooper, the director of the Center for Long-Term Cybersecurity at the University of California, Berkeley.

What’s more, fingerprints and face shape, the two most widely used forms of biometric identification, stay quite stable over time. A study of automatic face-recognition systems from Michigan State’s Biometrics Research Group examined nearly 150,000 mugshots from 18,000 criminals, with at least 5 years between the first and last photo. The researchers found that one off-the-shelf software package was still 98 percent accurate when matching a subject’s photo to one taken 10 years prior. There’s even a field of research that studies how facial software can recognize the same face before and after plastic surgery.

The same Michigan State lab found that fingerprint patterns stay consistent over time, too. This time, the study examined a database of fingerprints from more than 15,000 people who were arrested by Michigan State Police over the span of five years. The results showed that for practical purposes, a 12-year-old fingerprint could be matched with an original, with nearly 100 percent accuracy. In another experiment, the group found that children’s fingerprints begin to stabilize at about one year of age, and remain of sufficient quality to identify them for at least a year.

(Not all biometric identifiers remain constant: Pregnancy can alter the blood-vessel patterns in women’s retinas, for example, confusing retinal scanners.)

To overcome the security risk of static fingerprints, irises, and face shapes, some research has turned to the development of changeable biometrics.

In 2013, a team of Berkeley researchers came up with a futuristic system called “passthoughts.” The technique combines three factors: something you know (a thought), something you are (your brain patterns), and something you have (an EEG sensor for measuring brainwaves). To authenticate with a passthought, you think your secret key while wearing the sensor. The key can be just about anything: a song, a phrase, a mental image. The thought itself is never transmitted—just a mathematical representation of the electric signals your brain makes while thinking it.
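
The Berkeley system’s signal processing is far more involved, but the gist of template matching can be sketched simply: enroll by averaging a few feature vectors captured while the user thinks the secret, then accept a login attempt only if its features land close enough to that template. The feature values and threshold below are stand-ins, not the actual passthoughts design.

```python
# A highly simplified sketch of template matching for a "passthought": the
# raw EEG never leaves the device, only a feature vector, and a login attempt
# succeeds if its features sit close enough to the enrolled template. The
# feature extraction and threshold here are hypothetical stand-ins.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def enroll(readings):
    # Average several feature vectors captured while the user thinks the secret.
    return [sum(vals) / len(vals) for vals in zip(*readings)]

def authenticate(template, attempt, threshold=0.95):
    return cosine_similarity(template, attempt) >= threshold

template = enroll([[0.8, 0.1, 0.4], [0.7, 0.2, 0.5], [0.75, 0.15, 0.45]])
print(authenticate(template, [0.78, 0.14, 0.46]))  # same person, same thought -> True
print(authenticate(template, [0.1, 0.9, 0.2]))     # different signal -> False
```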

If someone else were to figure out exactly what you were thinking, they couldn’t impersonate your passthought, because every person thinks the same thought differently. A hacker might be able to defeat the system by using a phishing scheme: by tricking you into thinking your passthought, capturing the output, and later replaying it back to an authentication system to trick it. But you wouldn’t be compromised forever. You can just change your passthought.

Cooper says researchers are exploring ways to use a CRISPR-like system to embed alterable encryption keys into DNA, too. With changeable biometrics that use brainwaves or genetics, you’d have a way to prove you’re you, even if each of your fingerprints has been compromised ten times over.

Artificial Intelligence: The Park Rangers of the Anthropocene
March 24th, 2017

In Australia, autonomous killer robots are set to invade the Great Barrier Reef. Their target is the crown-of-thorns starfish—a malevolent pincushion with a voracious appetite for corals. To protect ailing reefs, divers often cull the starfish by injecting them with bile or vinegar. But a team of Australian scientists has developed intelligent underwater robots called COTSBots that can do the same thing. The yellow bots have learned to identify the starfish among the coral, and can execute them by lethal injection.

These robots probably aren’t going to be the saviors of the reef, but that’s not the point. It’s the approach that matters. The work of conservationists typically involves reducing human influence: breeding the species we’ve killed, killing the species we’ve introduced, removing the pollutants we’ve added, and so on. But all of these measures involve human action—some, intensively so. The COTSBots are different: They’re of us, but designed to ultimately operate without us. They represent a burgeoning movement to remove human influence from conservation—to save wild ecosystems by taking us out of the picture entirely.

In an intriguing thought experiment, landscape architect Bradley Cantrell, historian Laura Martin, and ecologist Erle Ellis have taken this ethos to its logical extreme, and ended up with what they call a “wildness creator”—a hypothetical artificial intelligence that would autonomously protect wild spaces. We’d create it, obviously, but then let it go, so it would develop its own strategies for protecting nature. Maybe it blocks out human-made light or noise. Maybe it redirects the flow of water or destroys litter. Maybe it deploys drones to cull invasive species. Think Skynet crossed with Captain Planet, or the Matrix meets Ranger Rick, or IBM’s Watson meets Greenpeace.

Cantrell, Martin, and Ellis have presented their ideas in a provocative new paper called “Designing Autonomy: Opportunities for New Wildness in the Anthropocene.” To be clear, they’re not remotely saying that “it will ever be technologically, financially, or politically possible to develop and install autonomous wildness creators at meaningful scales.” They’re not even recommending it. “That’s not the direction I want to see us going,” says Cantrell. “The paper has a tongue-in-cheek aspect. We make this proposition and immediately pull back.”

So, then: why?

Because exploring hypothetical futures tells us a lot about the concerns of the present. That’s science fiction in a nutshell. Ex Machina, System Shock, and Neuromancer aren’t how-to manuals; in their visions of robotic rebellion, they reflect our fears about our own fallibilities. So what happens when we speculate about AI going green instead of going rogue? That tells us something about the ethical questions that pervade modern conservation, about how we see our role in protecting our remaining wilderness, and about what “wild” even means.

“When people try to maintain natural places, there’s a tendency to end up over-curating them,” says Ellis. “So even with the best intentions, everything ends up conforming to what human cultures decide is important.” For example, my colleague Ross Andersen recently wrote about an ambitious and possibly quixotic plan to re-wild the Siberian steppes with resurrected woolly mammoths. Those large beasts once roamed there, sure, but the architects of this plan have made a judgment call about what those now mammoth-less plains should be like. The same goes for the U.S.’s decision to reintroduce wolves to Yellowstone in the 1990s, or New Zealand’s plan to kill all rats on the island by 2050, or the starfish-murdering COTSBot. The latter is a perfect example of possible over-curation, says Ellis, because the crown-of-thorns starfish isn’t even an invasive species—it’s a native one that occasionally goes through population outbreaks. “The idea that you’re going to automatically kill a lot of animals in the name of ‘protecting nature’ is a little disturbing,” he says.

“These interventions have been inherently controversial,” says Martin. “There’s already such an effort to present those decisions about which species get to live in a landscape and which do not as purely technical, when in reality, it’s very social and political.” Even when we’re trying to remove our influence, we’re stamping our humanity onto things.

But what if humans weren’t running the show? Artificial intelligence has progressed to a point where machines are capable of developing their own behavior, going beyond their original programs. When Google’s AlphaGo system recently beat the world’s best Go players, it did so with unconventional strategies, and moves that no human would ever have made. “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” said reigning human champion Ke Jie.

“That’s the interesting thing about bringing in an autonomous learning system that can come up with its own rules,” says Ellis, who wonders what such artificial enlightenment would look like when applied to conservation. “Maybe there are ways of doing things that have not occurred to us, and would not, but could emerge from a learning process that isn’t human.”

Several projects are already moving in this direction. At Harvard University, Cantrell’s team is designing prototypes for intelligently controlling river systems. People are good at diverting or barricading rivers with levees, dams, and flood barriers. But Cantrell imagines something more dynamic—sensors and structures that could more subtly manipulate the flow of water and the pattern of sediment to “protect against flooding, but also sculpt the land in advance of the next flood,” he says. The idea isn’t to “build one solution, but something that constantly updates itself.”

Meanwhile, other groups are developing drones that can plant trees, artificial pollinators, swarms of oceanic vehicles for cleaning up oil spills, or an autonomous, weed-punching farm-bot. Geoengineering—big attempts to counter climate change by manipulating the environment—is also a conceptual predecessor to a wildness creator. It’s a way of reshaping ecosystems by introducing something new and letting it run, by changing then relinquishing. Re-wilding projects like the Russian mammoth quest, where scientists introduce long-lost megafauna, are also similar. “You’re replacing a species that had a lot of control over its ecosystem—and it’s not human control,” says Ellis. “Our wilderness creator idea is just intensive re-wilding.”  

“The publication of a paper on the use of AI on conservation would have been hard to imagine five years ago, but we can now read it in one of the top journals in ecology,” says Eric Higgs, who studies ecology and philosophy at the University of Victoria. “It’s testament to the fact that we’re looking for new ways of addressing rapid change.” But he adds that conservationists have learned the hard way that protecting nature is only possible if people are invested in caring for their land or protecting local animals. “That human engagement piece has really jumped out as being very important,” Higgs says, and the wildness creator concept “is a denial of that.”

And in that denial, the concept reflects many of the tensions that underlie modern conservation. “The way we think of conservation is typically to right the wrongs of humans in the environment,” says Cantrell. “We’re cordoning off portions of the Earth to protect it from our influence, or trying to turn back that landscape. And if we take technological solutions down that same line of thought, we get to a point where we’re heavily managing ecosystems just to take the humanity out of them.”

The idea of fully removing ourselves from nature is unachievable. It’s the Anthropocene and humans are here to stay. “Instead, we should be thinking critically and carefully about how to co-exist with other species,” says Martin. And AI, while not supplanting that responsibility, can help us to exercise it. “There are so many technological utopians who are envisioning how tech can improve the lives of humans. Diverting some of that energy to promoting the lives of non-humans would be a worthwhile endeavor.”

How Aristotle Created the Computer
March 23rd, 2017

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.


Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Switching and Relay Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.

Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.

Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic ... and, finally, to collect ... some probable intimations concerning the nature and constitution of the human mind.

He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:

In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.

Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that, since Aristotle, logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

You can replace “Socrates” with any other object, and “mortal” with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — “all,” “is,” “are,” and “therefore” — are doing all the work.

Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:

  • An object is what it is (Law of Identity)
  • No statement can be both true and false (Law of Non-contradiction)
  • Every statement is either true or false (Law of the Excluded Middle)

These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.
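(For readers who want to see the last two laws in modern, two-valued form, here is a quick check in code. It is my own illustration, treating statements as simple booleans, not anything Aristotle or his successors wrote.)

    # A truth-table check of the last two laws for two-valued logic.
    # Exhausting both truth values of a statement p is enough.
    for p in (True, False):
        assert not (p and not p)   # Law of Non-contradiction
        assert p or not p          # Law of the Excluded Middle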

Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.

A fragment of the Elements (Wikimedia Commons)

Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid’s system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.

Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.

Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y.”
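One way to see what the equation says is to read x and y as classes, or sets. The sketch below is my own illustration rather than Boole’s notation, and the tiny universe and class names are invented for the example:

    # Reading Boole's classes as sets: x = x*y says that intersecting
    # x with y gives back x unchanged, i.e., every member of x is in y.
    men = {"Socrates", "Plato"}                     # the class x
    mortals = {"Socrates", "Plato", "a sparrow"}    # the class y

    # "All men are mortal" holds in this toy universe.
    assert men & mortals == men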

The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called the Laws of Thought “the work in which pure mathematics was discovered.”

Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”

He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

Shannon’s mapping from electrical circuits to symbolic logic (University of Virginia)

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Shannon’s adder circuit (University of Virginia)

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
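To get a feel for the kind of construction Shannon described, here is a rough sketch of binary addition built from Boolean operations. It is an illustration in the spirit of his thesis, not a transcription of his actual circuit:

    # Binary addition from Boolean operations: ^ is XOR, & is AND, | is OR.
    def half_adder(a, b):
        return a ^ b, a & b          # (sum bit, carry bit)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2           # (sum bit, carry out)

    def add_bits(x_bits, y_bits):
        # Add two equal-length little-endian bit lists by chaining full adders.
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    # 3 + 1 = 4, with numbers as little-endian bit lists: [1, 1] + [1, 0] -> [0, 0, 1]
    assert add_bits([1, 1], [1, 0]) == [0, 0, 1]

The chaining in add_bits is the “stringing together” of adders described above: each stage passes its carry to the next.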

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.

While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem,  but in the template for computer design it provided along the way.

Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.

He also imagined a machine that could process the language, which he called the calculus ratiocinator.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Leibniz didn’t get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz’s dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole’s attempt to improve Aristotle’s logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege’s system.

Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.

The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.
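Those quantifiers survive almost unchanged in modern programming. As a small illustration (mine, not Frege’s notation), Python’s all() and any() play the roles of “for every” and “there exists” over a finite domain:

    # Frege's universal and existential quantifiers over a finite domain.
    numbers = range(1, 10)
    assert all(n > 0 for n in numbers)        # "for every n, n > 0"
    assert any(n % 7 == 0 for n in numbers)   # "there exists an n divisible by 7"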

Frege’s formal language — what he called his “concept-script” — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called “the symbol game,” “played with meaningless tokens according to certain purely syntactic rules.”

All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.

As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.

This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?

The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:

  • Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
  • Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)

Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.

Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.

On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
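A compact way to see how much can be built from the successor alone is to sketch the definitions in code. The encoding below is a standard one, though the helper names are my own:

    # Peano-style arithmetic: numbers are zero plus repeated successors,
    # and addition and multiplication are defined recursively from that.
    ZERO = None

    def succ(n):
        return ("S", n)

    def pred(n):
        return n[1]

    def add(a, b):
        # a + 0 = a ;  a + S(b) = S(a + b)
        return a if b is ZERO else succ(add(a, pred(b)))

    def mul(a, b):
        # a * 0 = 0 ;  a * S(b) = (a * b) + a
        return ZERO if b is ZERO else add(mul(a, pred(b)), a)

    def to_int(n):
        return 0 if n is ZERO else 1 + to_int(pred(n))

    two, three = succ(succ(ZERO)), succ(succ(succ(ZERO)))
    assert to_int(add(two, three)) == 5
    assert to_int(mul(two, three)) == 6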

The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.

The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this  “liar’s paradox” is the sentence:

This sentence is false.

If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.

Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.

This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)
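The self-undermining structure is easy to reproduce mechanically. In the sketch below, an informal analogue with functions standing in for sets (“membership” meaning that applying one to another returns True), asking whether R belongs to itself sends the program into an endless loop:

    # An informal analogue of Russell's paradox: R(x) says "x is not a
    # member of itself."
    def R(x):
        return not x(x)

    # R(R) calls x(x) with x = R, which calls R(R) again, and so on;
    # uncommenting the next line raises RecursionError.
    # R(R)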

Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.

Russell and Whitehead tried to resolve the paradox by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to its own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)

Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).

The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.
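Church’s system represents numbers and operations as nothing but functions. The snippet below uses Python lambdas to sketch the standard Church encoding; the helper names are my own:

    # Church numerals: the number n is "apply f to x, n times."
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    def to_int(n):
        return n(lambda k: k + 1)(0)

    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    assert to_int(plus(two)(three)) == 5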

Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:

Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.

Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.

As a byproduct, he found a mathematical model of an all-purpose computing machine.
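The machine Davis describes can be sketched in a few lines: a tape of symbols, a head that reads and writes one cell at a time, and a finite table of rules. The simulator and rule table below are my own toy illustration, not Turing’s construction:

    # A toy Turing machine: a tape, a head, a finite rule table.
    def run_turing_machine(rules, tape, state="start"):
        cells = dict(enumerate(tape))   # position -> symbol
        head = 0
        while state != "halt":
            symbol = cells.get(head, "_")              # "_" is a blank cell
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    # Rule table for appending a "1" to a block of 1s (unary n -> n + 1).
    rules = {
        ("start", "1"): ("1", "R", "start"),   # skip over the existing 1s
        ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
    }
    assert run_turing_machine(rules, ["1", "1", "1"]) == ["1", "1", "1", "1"]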

Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:

Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.

This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.

Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.

During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.

Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.

As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.

Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.

In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.

Today’s most promising machine learning techniques use neural networks, which were first invented in the 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later when they were combined with statistical techniques, which allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
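The link McCulloch and Pitts drew between neurons and Boolean logic can be sketched directly: a threshold unit that fires when its weighted inputs add up to enough. The weights and thresholds below are chosen by hand for illustration; modern networks learn such parameters from data:

    # A McCulloch-Pitts-style threshold neuron mimicking Boolean gates.
    def neuron(inputs, weights, threshold):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    def AND(a, b):
        return neuron([a, b], [1, 1], threshold=2)

    def OR(a, b):
        return neuron([a, b], [1, 1], threshold=1)

    def NOT(a):
        return neuron([a], [-1], threshold=0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and NOT(1) == 0 and NOT(0) == 1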

This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own.

How the Diving Bell Opened the Ocean's Depths
March 23rd, 2017, 04:11 PM

Imagine sitting on a narrow bench inside a dark room. Your feet are dangling into a floor of water. You’re vaguely aware of the room moving. Your ears start ringing. If you move too much, you feel the room sway, which could bring the floor rushing in to fill it. You take a breath and dive down, swim outside the room, groping the water, looking for its bottom, reaching for something valuable enough to take back with you.

If you’ve ever pushed an upside-down cup into water, reached inside, and found it still empty, you’ve encountered a diving bell. It’s a simple concept: The water’s pressure forces the air, which has nowhere else to go, inside the “bell.” Once people realized that trapped air contains breathable oxygen, they took large pots, stuck their heads inside, and jumped into the nearest body of water. In the 2,500 years since, the device has been refined and expanded to allow better access to the ocean’s depths. But that access has not come without human cost.

* * *

The first account of diving bells comes from Aristotle in the 4th century B.C.E. Legend has it Aristotle’s pupil Alexander the Great went on to build “a very fine barrel made entirely of white glass” and used it in the Siege of Tyre in 332 B.C.E. However, the facts of Alexander the Great’s adventures come mostly from depictions in fragments of ancient art and literature, which render him as a demigod who conquered the darkness and returned to the dry realm of historians and poets.

Prior to the diving bell, the wet parts of the earth were places people could move atop but not transit within. Shallow diving was possible: Duck hunters in ancient Egypt and swimmers in Rome and Greece used hollowed-out reeds or plant stems as snorkels. But they were still surface-bound, barely deeper than the reflections of the sky on the water above.

The diving bell changed that. Figuring out how to stay underwater was a turning point not only in naval technology, but also in science and adaptation. The diving bell acted as a portable atmosphere, allowing divers to descend a dozen feet or so, briefly leave the bell, return to it for air, and then return to the surface and start all over once they had filled their home base with too much carbon dioxide.

Staying submerged began as a simple trick, a novelty meant mostly for spectacle. But like most human exploration, the underwater landscape became appealing for its latent revenue opportunities. At first, diving bells appear to have been most heavily used in the pearl and sponge industries. Then, in 1531, the Italian inventor Guglielmo de Lorena came up with a new application. Using slings to attach a bell to his body, he could collect treasure from capsized Roman ships. After the defeat of the Spanish Armada in 1588, according to Francis Bacon, Spanish prisoners spread the word that their captors’ riches had sunk off the coast of Scotland; industrious divers used bells to pick up the scraps.

Seeing the technology as a business opportunity, scientists and inventors made improvements to a concept that had shown virtually no change in two millennia. A renaissance era for the diving apparatus commenced. The German painter and alchemist Franz Kessler, the Swedish colonel Hans Albrecht von Treilebe, the Massachusetts Bay Colony governor Sir William Phips (best known today for the Salem witch trials), the French priest Abbe Jean de Hautefeuille, the French physicist Denis Papin, and the British super-scientist Edmund Halley (of Halley’s Comet fame) all made contributions to diving bell technology in the 17th century—all in the interest of collecting valuables no one else could reach. Phips went as far as modern-day Haiti to chase sunken treasure.

The most important of these contributions may have come from de Hautefeuille, who in 1681 wrote that diving deeper alters the atmospheric pressure of the air available to a diver. Pressure was the key to more sustainable expeditions, it turned out. Halley then developed a complex system of weighted air barrels, hoses, and valves to keep a relatively stable level of oxygen and pressure inside his lead-reinforced wooden bell design.

But increasing the pressure inside the bell posed a problem. While the added pressure kept the water level down as the bell descended, it also pressurized the bell’s inhabitants, occasionally bursting divers’ eardrums. Using faucets to adjust the pressure inside the bell and sending barrels back and forth to the surface to replenish his air supply, Halley was able to spend well over an hour 60 feet below the surface, though he did complain that his ears felt “as if a quill had been thrust into them.”
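A rough back-of-the-envelope calculation (my numbers, not Halley’s) shows why 60 feet is punishing: every 33 feet or so of seawater adds roughly another atmosphere of pressure, and Boyle’s law says the trapped air shrinks in proportion:

    # Pressure and air volume in a bell at depth, per Boyle's law (P1*V1 = P2*V2).
    depth_ft = 60
    atmospheres = 1 + depth_ft / 33          # surface air plus the water column
    air_volume_fraction = 1 / atmospheres

    print(f"~{atmospheres:.1f} atm at {depth_ft} ft; "
          f"air squeezed to ~{air_volume_fraction:.0%} of its surface volume")
    # prints: ~2.8 atm at 60 ft; air squeezed to ~35% of its surface volume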

Casualties became a theme. In 1775, the Scottish confectioner Charles Spalding improved diving-bell safety with better balance weights, a pulley system that increased dive control, signal ropes leading to the surface, and even windows. Spalding and his nephew, Ebenezer Watson, used such diving bells for salvage work—until they both suffocated inside one off the coast of Ireland.

The final contribution to the Halley-style “wet” (partially-enclosed) bell was by Englishman John Smeaton more than a decade later. Smeaton’s bell maintained the air supply by connecting a hose to a pump above the surface. This design enabled laborers to fix the foundation of England’s Hexham Bridge. But it led to lower-class caisson workers coming down with what they called “caisson sickness,” as Smeaton’s bell became ubiquitous in harbors throughout the world. Now known as the bends or decompression sickness, caisson sickness sometimes caused surfaced divers to have strokes, leading to paralysis and even death. Workers would come back to the dry world and pray they didn’t mysteriously take ill.

It wouldn’t be until nearly 1900 that scientists began to master the effects of pressure on the human body. Eventually, the wet bell gave way to modern, completely enclosed “dry” bells, which were really just pressurized diving chambers. By the mid-20th century, these sophisticated diving bells aided the booming offshore oil industry—fuel awaited human discovery in the deep, next to shipwrecks, sponges, and pearls.

* * *

Alexander the Great reportedly claimed that while submerged, he saw a monster so massive that it took three full days to swim past him. It would have been physically impossible to survive in his bell that long, of course, but the story makes good legend: The ocean offers a void big enough to contain human metaphor and myth, an emptiness vast enough to consume a three-day-long behemoth—or to swallow the continent of Atlantis (as Plato claimed), or the whole Earth itself (as Noah’s God commanded), or to hide the Missing Link on the lost island of Lemuria, or to conceal countless missing vessels in the Bermuda Triangle.

It’s no coincidence that the psychoanalyst Carl Jung chose Proteus, the shape-shifting Greek god of water who tells the future to whoever can catch him, as a manifestation of the unconscious, that great dark sea in the mind. The ocean represents the unknown. For thousands of years, it marked the portion of Earth people could never access. It was a place conquerable only by God, whom Isaiah addressed, “Art thou not it that dried up the sea, the waters of the great deep; that made the depths of the sea a way for the redeemed to pass over?”

Imagine again: Your arms are full of something heavy you imagine to be precious, and you spend the last of your breath kicking, ascending back to the small room where you hope there’s still enough air to breathe, and then enough to make it to the surface, where you’ll wait to find out whether you’ll become one of the sick ones. Someone will pay you, and maybe it’ll be enough. To go underwater always challenges humanity’s natural place; to strive to stay there is to defy our given position on the earth. But humans will persist, because still so little is known about what lurks deep in the ocean, and because discovering it is worth the trial of pursuit.


This article appears courtesy of Object Lessons.

The Video Game That Claims Everything Is Connected
March 23rd, 2017, 04:11 PM

I am Rocky Mountain elk. I somersault forward through the grass, toward a tower of some sort. Now I am that: Industrial Smoke Stack. I press another button and move a cursor to become Giant Sequoia. I zoom out again, and I am Rock Planet, small and gray. Soon I am Sun, and then I am Lenticular Galaxy. Things seem a little too ordinary, so I pull up a menu and transform my galaxy into a Woolly Mammoth. With another button I multiply them. I am mammoths, in the vacuum of space.

There are others, too. Hydrogen atom. Taco truck. Palomino horse, spruce, fast-food restaurant, hot-air balloon. Camel, planetary system, Higgs boson, orca. Bacteriophage, poppy, match, pagoda, dirt chunk, oil rig. These are some of the things I got to be in Everything, a new video game by the animator and game designer David OReilly.

It may sound strange. What does it mean to be a fast-food restaurant or a Higgs boson? That’s the question the game poses and, to some extent, answers. In the process, it tumbles the player through galaxies, planets, continents, brush, subatomic abstractions, and a whole lot of Buddhist mysticism. The result turns a video-game console into an unlikely platform for metaphysical experimentation.

* * *

In retrospect, OReilly’s last game was a warm-up for this one. Called Mountain, the game depicted a mountain, disembodied in space, at which worldly miscellany hurtled and sometimes stuck. Eventually, after 50 hours or more, the mountain, rather than the human, quit playing and departed.

When I wrote about Mountain upon its release in 2014, it was easy to find a hook. OReilly had produced several esoteric, animated short films, but he was best-known for designing the animations for the “alien child” video-game sequence in Spike Jonze’s Her, a film about a man’s relationship with an artificial intelligence that eventually reaches transcendence and leaves him. Her, I argued, was Hollywood taking the easy way out with alien love. Scarlett Johansson’s Samantha was just a human left unseen. Mountain offered a bolder invitation: to commune with a representation of an inanimate, aggregate object rather than a living, individual one.

Against all odds, Mountain was a commercial success. It cost $1, and it did well enough to allow OReilly to self-fund the development of Everything. That’s a big bet, but OReilly feels palpable glee from having taken it. “Money lacks the ability to look forward,” he tells me, reflecting on his difficulty parlaying short-film festival success into paying work. He sells the risk as a moral imperative. “I could have done a commercial thing or gotten a mortgage,” he explains, “but I felt a responsibility to go deeper.”

Three years later, Everything certainly goes deeper. The game sports thousands of unique, playable things, promising players that anything they can see, they can be. To “be” something in Everything means binding to and taking control of it. Once accomplished, the player can pilot that object around Everything’s vast, multi-level 3-D world. Rolling the boulder over to a Montgomery palm allows the player to ascend further, as I did up into the galaxy from the Rocky Mountain elk. The game also allows downward progress: descending from planet, to continent, to kelp forest, and then orca, then plankton, then fungus, then atom. And further, too, until discovering that, according to Everything, the tiniest of things in one dimension might just be the biggest ones in another.

Everything’s tagline promises that everything you can see, you can be, which has led some to conclude that the game is a “universe simulator,” along the lines of No Man’s Sky. But Everything isn’t a universe simulator. You can’t be anything in Everything, and anyone with that aspiration will leave disappointed. After bonding with a fast-food restaurant, players can’t descend into it to discover booths, ceramic floor tiles, low-wage workers, hamburger patties, or the fragments of spent straw sheaths like they can with galaxies and continents and shrubs.

Only a fool would try to make a game that contains everything—or think that it would be possible to play one. A game containing everything in the universe would be coextensive with the universe. We’re already playing that game, it turns out. But that fact is hard to see. Everything helps a little, by reminding people of the things that coexist both alongside and very far away from them.

* * *

Everything’s take on the matter comes at a cost. In Mountain, the nature of a mountain was easier to imply, even if as a caricature, in a video game. Mountains are massive structures of rock and earth, formed and destroyed by tectonics and erosion. The timescale of a mountain makes a human being’s encounter with it evanescent. Fifty hours into Mountain, staring at the same mountain, the slowness of geological time feels palpable.

In Everything, everything feels more familiar, more human. The player moves things around, side to side and up and down, in the manner familiar to video games. This makes sense for beetles and cargo ships, but less so for redwood trees and office buildings—which disconnect from the ground and lumber along as if they were giraffes. The game tries to undermine its anthropomorphism by animating living creatures in a deliberately unfamiliar, awkward way. Mammals, their limbs fixed, execute somersaults rather than walking upright. The things in Everything also express existential angst, and with language, no less. “So many times I could have asked him out,” a lime wedge says, as I, VHS cassette, tumble past it. What would it mean for citrus to date, or to Tinder?

Such questions make more sense when considered alongside audio clips that the player can find throughout the game. They are excerpts from the lectures of Alan Watts, the British-American philosopher-mystic who popularized Eastern philosophy in the West during the mid-20th century. He’s largely responsible for importing Buddhism to California in particular, where its rise had an important influence on the counter-culture movements of the 1960s—which together partly shaped the rise of the microcomputer in Silicon Valley.

Watts’s monologues are insidiously seductive. His voice imposes involuntary serenity, even among listeners (like me) who disagree with the ideas it conveys. The cadence and quality of the recordings also telegraph the period of their production: the cradle of mid-century prosperity, when certainty sounded more certain. Blending Zen and Vedanta with Freud and Heisenberg, Watts argued against the Western notion of the alienated self, separate from and at odds with its surroundings. Instead, he advocated a holistic conception of being, in which all entities in the cosmos are fundamentally interconnected, reliant, and compatible.

The recordings are extensively excerpted in the game, to “bind the ideas of the game to its structure,” as OReilly puts it. To do so required extensive negotiation with Watts’s estate, which has turned Watts’s lectures into a cottage industry for corporate licensing. OReilly appealed to Mark Watts, Alan’s son. Many owners of PlayStations are probably unfamiliar with Watts’s ideas, OReilly reasoned, despite their influence on the contemporary mindfulness trend they partly enabled. The two struck a deal: The video game and the library would partake in a delightful, unexpected cross promotion.

OReilly’s own view of the result is broad and unassuming. He does see a central claim in the game—“the world as subtracted from the idea of the self.” But OReilly also knows from Mountain—used as an object of mockery as much as a relaxation aid—that people use media for their own purposes, even if those purposes amount to making GIFs for their friends.

* * *

Even so, Everything yokes its horse too tightly to Watts’s cart. The concreteness of the philosopher’s voice and ideas risk overwhelming all other interpretations. And even without them, Everything’s narrative structure (yes, there is one) is textbook Watts. The player enters the universe with an anxious certainty about the role of the self. Over time, with practice, that player can let go of those attachments, free the mind, and reach enlightenment. At which point the real work of living—or playing—can commence.

For players prepared to adopt Watts’s take on existence, that’s not a problem. But for others, including me, it’s hard to shake off Everything’s unwelcome claim that everything in the universe is connected, accessible, and familiar. To be a thing in Everything feels so much like being a person, or an avatar of one, that it undermines the separation OReilly so adeptly achieved in Mountain.

When I eat bacon, or view zebras, or feel the breeze from a desktop fan, or ingest the hydrogen atoms bound to oxygen in a glass of water, I partake of those things only in part. Their fundamental nature remains utterly separate and different from me, and from one another, too. I might be made of carbon and oxygen and hydrogen, but I can never really grasp what it is to be carbon. I might enter a fast-food restaurant, and I might even leave with bits of it inside me, but I can never fathom what it means to be a restaurant. The best I can do is to tousle the hair of that question, and establish the terms on which approximations might be possible.

I tried to play Everything with that attitude in mind, rather than Watts’s holism. And it obliged surprisingly well. For one part, the game puts man-made entities on the same footing as natural ones. Bacon and street lamps are no less or more valid avatars in Everything than are spruce trees or ice planets. This idea alone is enough to recommend the game, and to break the yoke of Alan Watts, whose version of Western Buddhism remained bound too tightly to environmental naturalism.

And Everything offers a paradoxical salve to the anthropomorphism on which it also relies. When the rocks and the amoeba all have and express the same anxiety of death as people, as they do in Everything, they also draw attention to the fact that rocks and amoeba can’t possibly have that anxiety—at least not in the same way as you and me. In her book Vibrant Matter, the political scientist Jane Bennett has a tidy summary of this unexpected escape route from human self-centeredness:

Maybe it’s worth running the risks associated with anthropomorphizing … because it, oddly enough, works against anthropocentrism: a chord is struck between person and thing, and I am no longer above or outside a nonhuman “environment.”

Counterintuitively, by allowing things unlike people to pretend they are like us, the game helps drive home the fact that they are not.

For another part, Everything embraces an aesthetic of messiness rather than order. Things are in their place, to an extent: Descending into a continent unveils animals, fences, and farmhouses; rising into a solar system reveals planets and spacecraft. But the range and specificity of things in Everything spotlights the delightful and improbable diversity of existence. The universe contains bowling pins no less than quasars, articulated buses no less than cumulus clouds. The aesthetics of being isn’t a smooth flow of interconnectedness, as Alan Watts would have it. It’s a depraved bestiary whose pages share the ordinary with the preposterous with the divine.

There’s a lovely moment in Everything, just before the player reaches its version of awakening. A new thing appears in a curious murk. It’s a PlayStation, wired up to a television. The game displayed upon it is Everything, and the scene is the very one the player currently occupies. In a humble whisper, Everything admits that it is not everything, but only a video game by that name, full of things made from polygons, just pretending.

People play games—and read books, and listen to lectures—not to mistake their ideas for the world, but in order to find new ways to approach that world. This fact is so obvious that it seems stupid to observe it. And yet, video games—that medium of prurient adolescent fantasy at worst, and numbing, compulsive distraction at best—rarely try to do so, let alone succeed, especially at the level of ideas as abstract as ontology, the study of being.

Perhaps this is Everything’s greatest accomplishment: a video game with a metaphysical position strong and coherent enough to warrant objection as much as embrace.

Becoming ‘Everyone’s Little Sister’ to Deal With Sexism
March 22nd, 2017, 04:11 PM

A reader with a Ph.D. in physics has been working in the tech industry for many years, but she’s struggled to cope with the huge gender imbalance at the start-ups she’s worked for. She feels she can’t fully be herself—or a mother:

When I entered the office for my interview, I saw every head in the glass-enclosed conference room pop up and look over at me. I’ve trained myself to have a sort of small, permanent smile plastered on my face, and I hoped, as the room was looking me over, that my smile looked natural, approachable, and genuine.

That is the persona I’ve settled on: Approachable and genuine. Everyone’s little sister.

In that way, I can inhabit a special place, still allowed to be feminine, someone everyone roots for but no one is sexually attracted to, or intellectually threatened by. Everyone wants his kid sister to win. Everyone will defend his little sister from bullies.

Sure, you may forget she is a girl; you may leave her out of some things because you forget about her; but you are not going to forget her altogether. And you certainly aren’t going to want your friends to sleep with her.


What Happens If Uber Fails?
March 22nd, 2017, 04:11 PM

The thing about a market bubble is that you don’t really know how big it is until it pops. So it doesn’t pop, and doesn’t pop, and doesn’t pop, until one day it finally pops. And by then it’s too late.

The dot-com collapse two decades ago erased $5 trillion in investments. Ever since, people in Silicon Valley have tried to guess exactly when the next tech bubble will burst, and whether the latest wave of investment in tech startups will lead to an economic crash. “A lot of people who are smarter than me have come to the conclusion that we’re in a bubble,” said Rita McGrath, a professor of management at Columbia Business School. “What we’re starting to see is the early signals.”

Those signals include businesses closing or being acquired, venture capitalists making fewer investments, fewer companies going public, stocks that appear vastly overpriced, and startup valuations falling.

Then you have a company like Uber, valued at $70 billion despite massive losses, and beleaguered by one scandal after another. In 2017 alone Uber has experienced a widely publicized boycott that led to an estimated half-a-million canceled accounts, high-profile allegations of sexual harassment and intellectual property theft, a leaked video showing its CEO cursing at an Uber driver, a blockbuster New York Times scoop detailing the company’s secret program to trick law enforcement, and multiple senior leaders either resigning or being forced out.

“As someone trying to raise [venture capital] right now, I am very concerned that this is going to implode the entire industry,” one person wrote in a forum on the technology-focused website Hacker News earlier this week. It’s understandable that investors and entrepreneurs would be “watching this Uber situation unfold closely,” as Mike Isaac, the New York Times reporter, put it in a tweet about the Hacker News post. Especially at a time when rising interest rates give investors more options, and ostensibly make the highly valued pre-IPO companies like Uber less attractive.

But how much is the tech industry’s fate actually wrapped up in Uber’s? If Uber implodes, will the bubble finally pop? It’s a question that’s full of assumptions: Uber’s fate is uncertain, and nobody really knows what kind of bubble we’re in right now. Yet it’s a question still worth teasing apart. Trillions of dollars, thousands of jobs, and the future of technology all hang in the balance.

“These bubbles swing back and forth in fear and greed,” McGrath told me, “and when Uber stumbles, it triggers fear. Part of this bubble is created basically in a low-interest-rate environment. Money from all over the world is pouring into this sector because it has nowhere else to go.”

This is a key point—perhaps the key point that will determine whether Uber lives or dies. Uber isn’t worth $70 billion because it is actually worth $70 billion. Its valuation is that high despite the fact that it’s not profitable, and despite the fact that it has little protection from competitors baked into what it is and does. Uber’s valuation, in other words, is a reflection of the global marketplace and not a reflection of Uber’s own durability as a company.  

“To me, it’s a big question of whether they are going to be able to sustain the business model,” McGrath told me. “They have been very disruptive to incumbents, but there are no significant barriers to entry to their model. If you switch [services], you maybe have to re-enter your credit-card information and download a new app, but from there you’re good to go. There are pundits who say it’s only a matter of time.”

And then what? If Uber goes kablooey, what happens to all the other unicorns—the 187 startups valued at $1 billion or more apiece, according to the latest count by the venture capital database CB Insights?

Despite Uber’s influence, it’s unlikely that the company’s potential failure would set off too terrible a chain reaction in Silicon Valley, several economists told me. “You need to make a distinction,” McGrath said, “between the startups that are really creating value and have something that will protect them in the event of imitation—versus the ones that are built on a lot of assumptions that really haven’t been tested yet, and money has been pouring into them because [it] has nowhere else to go.”

One instructive example is Theranos, the company known for its needle-free blood-testing technology. A few short years ago, it was roundly considered a Silicon Valley success story, valued at some $9 billion. Then, The Wall Street Journal revealed in a deeply investigated series of stories that the technology didn’t actually work as claimed—information that led to federal sanctions, lab closures, and ultimately Theranos’s announcement that it would leave the medical-testing business altogether. Theranos failed spectacularly, but it didn’t pop the bubble. So perhaps that’s a sign that the bubble isn’t going to pop all at once the way it did last time. The key is whether investors see a significant failure—like Theranos, and maybe Uber—as a one-off, or as a reflection of a systematic problem bubbling under the surface.

“One hypothesis could be that if a large pre-IPO tech company fails, then the source of capital for the others will start to shrink,” said Arun Sundararajan, a professor at New York University’s Stern School of Business. “That’s part of, I am sure, what happened during the dot-com bubble. But we are in a very different investment environment now.”

There are two big changes to consider. For one, practically every company is now a technology company. Silicon Valley used to make technology that mainstream consumers didn’t care about—or didn’t know that they even used. Not so, today. Technology is pervasive throughout the economy and throughout culture, which creates a potential protective effect for investors. “The investments into these companies are creating new business models in massive swaths of the economy, as opposed to being insulated,” Sundararajan said. “Also, a bulk of the money going into these companies is coming from players who are not dependent on the success of tech alone for their future financing.”

This is the second change to consider: Whereas tech investments were once made by a relatively small group of venture capitalists who funded companies that then went public, that’s no longer the case. “Even if you put Uber aside and look at some of the larger recipients of pre-IPO investment over the last few years—it’s a very different cast of characters,” Sundararajan told me. “There are large private equity firms that are much more diversified than, say, Kleiner Perkins was 20 years ago.”

Sundararajan’s referring to Kleiner Perkins Caufield & Byers, the venture capital firm that “all but minted money” in the 1990s, as the writer Randall Smith put it. Back in the day, the company made its investors enormous sums of money with early investments in Google and Amazon, but has stumbled in recent years.

All of this means that the investment infrastructure supporting technology companies has changed, and that’s largely because of how technology’s place in culture has changed. “If Uber fails—and there’s no guarantee that it will—all of Uber’s investors won’t say, ‘Were we wrong to invest in tech?’” Sundararajan said. “They will say, ‘Did we misread the capabilities of this one company?’”

If anything, Sundararajan says, Uber is getting a tough, public lesson in how not to run a business. The company, for its part, is doubling down on attempts to rebuild its image. In a conference call with reporters on Tuesday, executives for the ride-sharing service expressed support for Travis Kalanick, Uber’s embattled co-founder and CEO.

“By now, it’s becoming increasingly apparent that the issues that are putting Uber in the news frequently don’t have much to do with either its business model or its identity as a tech company,” Sundararajan said. “If there is a serious reduction in Uber’s value over the next year, the lesson that people will take away is one of better corporate governance for early-stage tech companies—so that, as they get into a later stage, they are not in a position where the tradeoffs they made early-on ended up being more harmful than good.”

Meanwhile, many investors are shifting their focus away from platforms and to the underlying technologies that, if they succeed, will outlast any given brand—for example, sensors for self-driving cars, autonomous medical technologies, myriad robotics, and so on. This, too, has an insulating effect against any single company’s failure.

Uber, which may or may not fail, may or may not bring down the rest of the economy with it. But the bubble is still likely to burst sooner or later. “There was this fog hanging over Silicon Valley in 2001,” Roelof Botha, a partner with VC firm Sequoia Capital, told Bloomberg Businessweek last fall. “And there’s a fog hanging over it now. There’s no underlying wave of growth.”

Since 2015, CB Insights has counted 117 down rounds in tech, instances in which a company raises new funding at a lower valuation than in a previous round. A down round doesn’t mean a company will fail, but it does signal a warning about the market it’s operating in.

The lesson here is that people trying to raise venture capital shouldn’t be worried about what Uber, specifically, might do to the economy if the company fails. There are plenty of other hints that a market correction is already well under way. The question now is whether the bubble will pop as dramatically as it has before, or simply go right on deflating the way it seems to be.

How the Rise of Electronics Has Made Smuggling Bombs Easier
March 21st, 2017, 04:11 PM

Last February, a Somali man boarded a Daallo Airlines flight in Mogadishu, Somalia’s capital. Twenty minutes after the flight took off, the unassuming laptop in his carry-on bag detonated, blowing a hole in the side of the plane. The bomber was killed, and two others were injured. But if the aircraft had reached cruising altitude, an expert told CNN, the bomb would have ignited the plane’s fuel tank and caused a second, potentially catastrophic blast.

The Daallo explosion was one of a handful of terrorist attacks that the Department of Homeland Security cited to help explain why it introduced new rules for some passengers flying to the U.S. with electronics. Starting this week, travelers on U.S.-bound flights from 10 airports in the Middle East and North Africa will be required to check all electronic items larger than a smartphone.

A senior administration official told reporters Monday night that the indefinite electronics ban was a response to continuing threats against civil aviation, but wouldn’t elaborate on the specific nature or the timing of the threat. Adam Schiff, the ranking member of the House Intelligence Committee, said in a statement that the ban was “necessary and proportional to the threat,” and that terrorists continue to come up with “creative ways to try and outsmart detection methods.”

The specificity of the new rules could hint at the nature of the intelligence they’re based on, says Justin Kelley, the vice president for operations at MSA Security, a large private firm that offers explosive-screening services. The ban could be focused on simply separating items like laptop-bombs from passengers who would need to access them in order to set them off, Kelley says. A transcript of our conversation, lightly edited for concision and clarity, follows.


Kaveh Waddell: How much has bomb-smuggling technology changed since Richard Reid tried to hide explosives in a shoe in 2001?

Justin Kelley: It’s pretty common, and it was common even before Reid. But now, everything we have on our person has some sort of power source to it, and that’s what they’re looking for. Everything from a laptop to a phone to an iPad—most of those restricted items they want now in checked baggage—they all have a power source, which is what bombers are generally looking for: something to kick off their device.

Waddell: What’s most difficult about designing a bomb that’s hard to detect, and small enough to fit into something like a shoe or underwear?

Kelley: The bombs in underwear were pretty rudimentary—they needed a human element. But if we’re looking at an electronic device, they can be done a whole host of different ways. They don’t necessarily need an actor to set off the device.

The electronic version has been around since we’ve had cellphones. Even before cellphones, you could use a greeting card that sings a holiday tune or a birthday wish—those use power as well. There’s a whole host of things that can be used to initiate a device. But now that we travel with all these electronics on our person, we need to look even harder.

Waddell: DHS said the new ban was created in response to a threat. How do authorities monitor the state of adversaries’ bomb-making skills to have a sense of what to watch for in airports?

Kelley: That intelligence could have been gathered through social media, or people they’re monitoring. Terrorist groups are always changing and adapting to what we put forth as security principles, so this is just another step. When liquids were banned from planes, that was also a product of intelligence, and I’m not surprised they don’t want to disclose the source.

Waddell: What kind of extra screening might electronics be subject to in checked bags that they might not get if they were carried on?

Kelley: Anything on those planes is going to be screened, whether it’s passenger-carried or cargo. This may have been driven by intelligence that someone would use power sources during a flight, that there would need to be some human interaction.

If they were interested in banning electronics entirely, there would be a stronger restriction.

Waddell: So you’re saying that it’s not necessarily that it’s easier to detect a bomb in checked baggage—it could be that separating a person from their electronics breaks a necessary link for using it as a bomb?

Kelley: Yeah, it tells me they want to separate the human element from the device. That’s what jumped out at me. That could be part of the reason.

Waddell: Would it be that difficult to check a laptop that would be set to detonate at a certain time or altitude?

Kelley: No, in fact, we have seen that in the past. That’s why I think this specific ban was driven by specific intelligence that they’ve gathered. The Reid-type device was human driven. If their concern was about a device in the belly of the plane, I think they’d have imposed other restrictions, but this just says, “You can fly with it; you just can’t fly with it on your person.”

Waddell: The scope of the ban is pretty limited right now: It only covers direct flights to the U.S. from 10 airports in the Middle East and North Africa. Could this be broadened at some point? Might this be a pilot program that’ll end up being implemented elsewhere?

Kelley: I think that comes down to how comfortable DHS is with security at these host countries. Our hope is that other countries follow TSA-like guidelines for screening—but some don’t. Those that are at or near our standard wouldn’t be part of the ban.

Waddell: Earlier today, the U.K. introduced a similar ban. If enough other countries jump on board, could this become standard practice?

Kelley: No doubt. And once someone steps up with a concern, with real-time information sharing, I think you might see quite a few more countries jump on as well.

What Happens When the President Is a Publisher, Too?
March 21st, 2017, 04:11 PM

It had to be Twitter. What other platform could a member of Congress use during a high-profile congressional hearing to keep tabs on the president’s reaction to that very hearing?

Not TV. Not radio. Certainly not a crinkly newspaper full of yesterday’s news.

But on Twitter, it’s possible to be sitting in a room full of your colleagues, surreptitiously scrolling on your mobile phone, and notice that, hey, whaddya know, President Donald Trump is tweeting again.

At a House Intelligence Committee hearing on Monday, Jim Himes decided to share some of those tweets with the men who were there being questioned—the FBI director James Comey and the NSA director Mike Rogers—along with the rest of the room, and the public. Here’s how it went down:

Himes: Gentlemen, in my original questions to you, I asked you whether the intelligence community had undertaken any sort of study to determine whether Russian interference had had any influence on the electoral process, and I think you told me the answer was no.

Rogers: Correct. We said the U.S. intelligence community does not do analysis or reporting on U.S. political process or U.S. public opinion...

Himes: So, thanks to the modern technology that’s in front of me right here, I’ve got a tweet from the president an hour ago saying, “The NSA and FBI tell Congress that Russia did not influence the electoral process.” So that’s not quite accurate, that tweet?

Comey: I’m sorry, I haven’t been following anybody on Twitter while I’ve been sitting here.

Himes: I can read it to you. It says, “The NSA and FBI tell Congress that Russia did not influence the electoral process.” This tweet has gone out to millions of Americans—16.1 million to be exact. Is the tweet as I read it to you—“The NSA and FBI tell Congress that Russia did not influence the electoral process”—is that accurate?

Comey: Well. It’s hard for me to react. Let me just tell you what we understand... What we’ve said is: We’ve offered no opinion, have no view, have no information on potential impact because it’s never something that we looked at.

Himes: Okay. So it’s not too far of a logical leap to conclude that the assertion—that you have told the Congress that there was no influence on the electoral process—is not quite right.

Comey: It certainly wasn’t our intention to say that today because we don’t have any information on that subject. That’s not something that was looked at.

The most telling aspect of this exchange is the nearly three seconds it takes for Comey and Rogers to react to Himes. They seem dumbfounded at first. Rogers does a little shake of his head and smirks. And, for once, it seems the moment of disbelief wasn’t—or at least wasn’t only—directed at the substance of the president’s tweet, but at the very fact of it.

In 2017, the president’s habit of spreading misinformation on Twitter is being fact-checked, nearly in real time, by members of Congress. Surely, a president has never before interjected himself into a congressional hearing this way?

It’s really worth watching the video.

As my colleague McKay Coppins wrote, we don’t actually know whether the president personally authored these tweets. “According to the @POTUS Twitter bio, they are mostly written by Trump’s social media director Dan Scavino. But if nothing else, the aide was taking his cues from the boss,” Coppins wrote.

Though Trump’s bombastic Twitter presence is a well-worn part of his shtick—or, um, personal brand—Monday’s episode shows he’s increasingly leveraging it for a new kind of punditry. (Possibly also a new kind of propaganda.) True to his reality-television instincts, Trump appeared primed for a fight Monday morning, before the hearing even began, when he used his personal Twitter account to deride coverage of the Russia scandal as “FAKE NEWS and everyone knows it!”

What everyone actually knows, or should by now, is that while Trump claims to hate “the media,” he is himself an active publisher. And when the Trump administration talks about the press as “the opposition,” that may be because Trump is himself competing with traditional outlets in the same media environment, using the same publishing tools. It’s no wonder there was so much speculation about Trump possibly launching his own TV network to rival Fox. It’s also no wonder that Trump recently suggested he owes his presidency to Twitter, which he has used to blast critics and spout conspiracy theories since at least 2011.

“I think that maybe I wouldn’t be here if it wasn’t for Twitter,” he told the Fox News correspondent Tucker Carlson during an interview that aired last week, “because I get such a fake press, such a dishonest press.”

“So the news is not honest,” Trump went on. “Much of the news. It’s not honest. And when I have close to 100 million people watching me on Twitter, including Facebook, including all of the Instagram, including POTUS, including lots of things—but we have—I guess pretty close to 100 million people. I have my own form of media.”

Trump’s right. He does have his own form of media. But he should also know this: Some Americans may be ambivalent about the truth. Politicians lie all the time and get away with it. But nobody likes the dishonest media.

The Like Button Ruined the Internet
March 21st, 2017, 04:11 PM

Here’s a little parable. A friend of mine was so enamored of Google Reader that he built a clone when it died. It was just like the original, except that you could add pictures to your posts, and you could Like comments. The original Reader was dominated by conversation, much of it thoughtful and earnest. The clone was dominated by GIFs and people trying to be funny.

I actually built my own Google Reader clone. (That’s part of the reason this friend and I became friends—we both loved Reader that much.) But my version was more conservative: I never added any Like buttons, and I made it difficult to add pictures to comments. In fact, it’s so hard that I don’t think there has ever been a GIF on the site.

I thought about building new social features into my clone until I heard my friend’s story. The first rule of social software design is that more engagement is better, and that the way you get engagement is by adding stuff like Like buttons and notifications. But the last thing I wanted was to somehow hurt the conversation that was happening, because the conversation was the whole reason for the thing.

Google Reader was engaging, but it had few of the features we associate with engagement. It did a bad job of giving you feedback. You could, eventually, Like articles that people shared, but the Likes went into an abyss; if you wanted to see new Likes come in, you had to scroll back through your share history, keeping track in your head of how many Likes each share had the last time you looked. The way you found out about new comments was similar: You navigated to reader.google.com and clicked the “Comments” link; the comments page was poorly designed and it was hard to know exactly how many new comments there had been. When you posted a comment it was never clear that anyone liked it, let alone that they read it.

When you are writing in the absence of feedback you have to rely on your own judgment. You want to please your audience, of course. But to do that you have to imagine what your audience will like, and since that’s hard, you end up leaning on what you like.

Once other people start telling you what they like via Like buttons, you inevitably start hewing to their idea of what’s good. And since “people tend to be extremely similar in their vulgar and prurient and dumb interests and wildly different in their refined and aesthetic and noble interests,” the stuff you publish will start looking a lot like the stuff that everybody else publishes, because everybody sort of likes the same thing and everybody is fishing for Likes.

What I liked about Reader was that not knowing what people liked gave you a peculiar kind of freedom. Maybe it’s better described as plausible deniability: You couldn’t be sure that your friends didn’t like your latest post, so your next post wasn’t constrained by what had previously done well or poorly in terms of a metric like Likes or Views. Your only guide was taste and a rather coarse model of your audience.

Newspapers and magazines used to have a rather coarse model of their audience. It used to be that they couldn’t be sure how many people read each of their articles; they couldn’t see on a dashboard how much social traction one piece got as against the others. They were more free to experiment, because it was never clear ex-ante what kind of article was likely to fail. This could, of course, lead to deeply indulgent work that no one would read; but it could also lead to unexpected magic.

Is it any coincidence that the race to the bottom in media—toward clickbait headlines, toward the vulgar and prurient and dumb, toward provocative but often exaggerated takes—has accelerated in lock-step with the development of new technologies for measuring engagement?

You don’t have to spend more than 10 minutes talking to a purveyor of content on the web to realize that the question keeping them up at night is how to improve the performance of their stories against some engagement metric. And it’s easy enough to see the logical consequence of this incentive: At the bottom of article pages on nearly every major content site is an “Around the Web” widget powered either by Outbrain or Taboola. These widgets are aggressively optimized for clicks. (People do, in fact, click on that stuff. I click on that stuff.) And you can see that it’s mostly sexy, sexist, and sensationalist garbage. The more you let engagement metrics drive editorial, the more your site will look like a Taboola widget. That’s the drain it all circles toward.

And yet we keep designing software to give publishers better feedback about how their content is performing so that they can give people exactly what they want. This is true not just for regular media but for social media too—so that even an 11-year-old gets to develop a sophisticated sense of exactly what kind of post is going to net the most Likes.

In the Google Reader days, when RSS ruled the web, online publications—including blogs, which thrived because of it—kept an eye on how many subscribers they had. That was the key metric. They paid less attention to individual posts. In that sense their content was bundled: It was like a magazine, where a collection of articles is literally bound together and it’s the collection that you’re paying for, and that you’re consuming. But, as the journalist Alexis Madrigal pointed out to me, media on the web has come increasingly un-bundled—and we haven’t yet fully appreciated the consequences.

When content is bundled, the burden is taken off of any one piece to make a splash; the idea is for the bundle—in an accretive way—to make the splash. I think this has real consequences. I think creators of content bundles don’t have as much pressure on them to sex up individual stories. They can let stories be somewhat unattractive on their face, knowing that readers will find them anyway because they’re part of the bundle. There is room for narrative messiness, and for variety—for stuff, for instance, that’s not always of the moment. Like an essay about how oranges are made, one so long that it has to be serialized in two parts.

Conversely, when media is unbundled, which means each article has to justify its own existence in the content-o-sphere, more pressure than most individual stories can bear is put on those individual stories. That’s why so much of what you read today online has an irresistible claim or question in the title that the body never manages to cash in. Articles have to be their own advertisements—they can’t rely on the bundle to bring in readers—and the best advertising is salacious and exaggerated.

Madrigal suggested that the newest successful media bundle is the podcast. Perhaps that’s why podcasts have surged in popularity and why you find such a refreshing mixture of breadth and depth in that form: Individual episodes don’t matter; what matters is getting subscribers. You can occasionally whiff, or do something weird, and still be successful.

Imagine if podcasts were Twitterized in the sense that people cut up and reacted to individual segments, say a few minutes long. The content marketplace might shift away from the bundle—shows that you subscribe to—and toward individual fragments. The incentives would evolve toward producing fragments that get Likes. If that model came to dominate, such that the default was no longer to subscribe to any podcast in particular, it seems obvious that long-running shows devoted to niches would starve.

* * *

People aren’t using my Reader clone as much anymore. Part of it is that it’s just my friends on there, and my friends all have jobs now, and some of them have families, but part of it, I think, is that every other piece of software is so much more engaging, in the now-standard dopaminergic way. The loping pace of a Reader conversation—a few responses per day, from a few people, at the very best—isn’t much of a match for what happens on Twitter or Facebook, where you start getting likes in the first few minutes after you post.

But the conversations on Reader were very, very good.

Abu Dhabi to Los Angeles: 17 Hours Without a Laptop
March 21st, 2017, 04:11 PM

Updated at 8:45 a.m.

The Department of Homeland Security will no longer allow passengers to carry electronics onto flights to the U.S. from 10 major airports in the Middle East and North Africa. Devices larger than a mobile phone—including laptops, tablets, and cameras—will need to be placed in checked baggage.

The airports are located in eight countries: Egypt, Jordan, Kuwait, Morocco, Qatar, Saudi Arabia, Turkey, and the United Arab Emirates. (Two airports were designated in Saudi Arabia and in the UAE.) Nine airlines—none of them American or European—will be responsible for enforcing the rules. The Department of Homeland Security said about 50 flights a day will be affected by the rules.

The ban was communicated to the relevant airlines and airports at 3 a.m. Eastern on Tuesday, in the form of an emergency amendment to a security directive. From that point, the airlines and airports will have 96 hours to comply. If they fail to, a senior administration official told reporters, “we will work with the Federal Aviation Administration to pull their certificate, and they will not be allowed to fly to the United States.”

The ban on larger electronics was developed in response to a “continuing threat to civil aviation,” according to another official, who would not say whether the threat had developed recently, or when the ban might be lifted. DHS is concerned about a trend of bombs being disguised as consumer items, like shoes, a printer, and even a laptop. The official said the data on checked electronics would not be searched.

Items in checked baggage are generally subjected to intense screening, and security officers sometimes open bags to look through them by hand. Requiring electronics to be checked could allow the Transportation Security Administration to scan them more closely than they would otherwise. In 2014, TSA officers began asking passengers to power on their devices to prove that they’re real—and not just a clever disguise for an explosive.

But relegating most electronics to a plane’s cargo hold comes with potential dangers. In the past, the Federal Aviation Administration has expressed concerns about checking too many lithium-ion batteries—the sort that power laptops—because they can catch fire. A senior administration official said the FAA is sharing information about “best practices” for transporting electronics with the affected airlines.

It will be up to airlines to differentiate between smartphones, which will be allowed in airplane cabins, and tablets, which will need to be checked. Some large smartphones, often called “phablets,” blur this boundary.

Royal Jordanian, the state airline of Jordan, was the first to notify its passengers of the new rules on Monday afternoon. The carrier, which operates flights to New York City, Detroit, and Chicago multiple times a week, announced the change in a tweet—which it went on to delete several hours later.

The tweet sparked hours of confusion, during which U.S. officials were tight-lipped. The Jordanian airline’s statement said only that the new policy “follow[ed] instructions from the concerned U.S. departments.”

A spokesperson for the Jordanian Embassy in Washington, D.C. said the airline’s policy was requested by the State Department. A senior administration official told reporters that the State Department had begun notifying governments of the upcoming ban on Sunday.

Edward Hasbrouck, a travel expert and consultant to The Identity Project, said the government has a history of announcing big policy changes with little notice. “This reminds me of the chaos when the DHS started restricting liquids, which occurred with no warning and people found out only at the airport,” he said.

Hacking Tools Get Peer Reviewed, Too
March 20th, 2017, 04:11 PM

In September 2002, less than a year after Zacarias Moussaoui was indicted by a grand jury for his role in the 9/11 attacks, Moussaoui’s lawyers lodged an official complaint about how the government was handling digital evidence. They questioned the quality of the tools the government had used to extract data from some of the more than 200 hard drives that were submitted as evidence in the case—including one from Moussaoui’s own laptop.

When the government fired back, it leaned on a pair of official documents for backup: two reports produced by the National Institute of Standards and Technology (NIST) that described the workings of the software tools in detail. The documents showed that the tools were the right ones for extracting information from those devices, the government lawyers argued, and that they had a track record of doing so accurately.

It was the first time a NIST report on a digital-forensics tool had been cited in a court of law. That its first appearance was in such a high-profile case was a promising start for NIST’s Computer Forensics Tool Testing (CFTT) project, which had begun about three years prior. Its mission for nearly two decades has been to build a standardized, scientific foundation for evaluating the hardware and software regularly used in digital investigations.

Some of the tools investigators use are commercially available for download online, for relatively cheap or even free; others are a little harder for a regular person to get their hands on. They’re essentially hacking tools: computer programs and gadgets that hook up to a target device and extract its contents for searching and analysis.

“The digital evidence community wanted to make sure that they were doing forensics right,” said Barbara Guttman, who oversees the Software Quality Group at NIST. That community is made up of government agencies—like the Department of Homeland Security or the National Institute of Justice, the Justice Department’s research arm—as well as state and local law enforcement agencies, prosecuting and defense attorneys, and private cybersecurity companies.

In addition to setting standards for digital evidence-gathering, the reports help users decide which tool they should use, based on the electronic device they’re looking at and the data they want to extract. They also help software vendors correct bugs in their products.

Today, the CFTT’s decidedly retro webpage—emblazoned with a quote from an episode of Star Trek: The Next Generation—hosts dozens of detailed reports about various forensics tools. Some reports focus on tools that recover deleted files, while others cover “file carving,” a technique that can reassemble files that are missing crucial metadata.
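To make “file carving” concrete: the basic idea is to scan raw bytes for a format’s known header and footer signatures and cut out whatever lies between them. The sketch below is a deliberately bare-bones illustration in Python, not a reconstruction of any NIST-tested tool; the disk-image filename is invented, and real carvers also handle fragmented files and false matches.

```python
# Bare-bones signature carving for JPEG files: scan a raw image for the
# format's header (FF D8 FF) and footer (FF D9) bytes and save whatever
# sits between them. Illustrative only; "disk.img" is a hypothetical file.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(raw: bytes):
    """Yield byte runs that look like complete JPEG files."""
    start = raw.find(JPEG_HEADER)
    while start != -1:
        end = raw.find(JPEG_FOOTER, start)
        if end == -1:
            break
        yield raw[start:end + len(JPEG_FOOTER)]
        start = raw.find(JPEG_HEADER, end)

if __name__ == "__main__":
    with open("disk.img", "rb") as f:          # hypothetical raw disk image
        for i, blob in enumerate(carve_jpegs(f.read())):
            with open(f"carved_{i}.jpg", "wb") as out:
                out.write(blob)
```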

The largest group of reports focuses on acquiring data from mobile devices. Smartphones have become an increasingly valuable source of evidence for law enforcement and prosecutors, because they’re now vast stores of private communication and information—but the sensitive nature of that data has made the government’s attempts to access it increasingly controversial.

“It’s a very fast-moving space, and it’s really important,” Guttman said. “Any case could potentially involve a mobile phone.”

It’s an odd feeling to flip through these public, unredacted government reports, which lay bare the frightful capabilities of commercially available mobile-extraction software. A report published just two weeks ago, for example, describes a tool called MOBILedit Forensic Express, which is made by San Francisco-based Compelson Labs. The tool works on Apple iPhones 6, 6S, and 6S Plus, two versions of Apple’s iPads, as well as several Samsung Galaxy smartphones and tablets. It can extract the following types of information from a mobile device:

… deleted data, call history, contacts, text messages, multimedia messages, files, events, notes, passwords for wifi networks, reminders and application data from apps such as Skype, Dropbox, Evernote, Facebook, WhatsApp, Viber, etc.

The product page for MOBILedit Forensic Express claims the software is capable of cracking passwords and PINs to get into locked phones, but it’s not clear how effective that feature is. Getting into a locked, encrypted smartphone—especially an iPhone—is difficult, and it’s unlikely MOBILedit can bypass every modern smartphone’s security system.

When the FBI tried to break into an iPhone 5C it found at the scene of the 2015 San Bernardino shooting, it initially wasn’t able to access the phone’s data, and asked Apple for help. (Presumably, the FBI would have had access to MOBILedit and other commercial tools.) Apple refused, and the FBI brought a lawsuit against the company—but withdrew it when agents finally found a way in.

Guttman says NIST doesn’t address phone encryption in its testing. “Encryption is certainly an issue for law enforcement access to phones and other digital media, but that issue is outside of our expertise and the type of work we do, which is focused on software quality and software understanding,” she said.

The NIST report on MOBILedit describes how the tool fared against different combinations of smartphones and mobile operating systems. It found, for example, that the tool only obtained the first 69 characters in particularly long iOS notes. Besides that issue and five others, though, the tool largely behaved as it promised it would on iOS devices, the report says.

“None of the tools are perfect,” Guttman said. “You really need to understand the strengths and limitations of the tools you’re using.”

Unlike some more complex tools, MOBILedit doesn’t require an investigator to open up a smartphone and manipulate its internals directly—the software connects to the target phone with a cord, just like a user might to update his or her device. But law enforcement doesn’t necessarily need to force its way into a phone that it’s interested in searching, either by cracking open its case or by brute-forcing its passcode.

In certain cases, officers can just ask—or pressure—the phone’s owner to open it. That’s what happened when Sidd Bikkannavar, a NASA engineer, was stopped by a customs agent on his way back to his native United States from a vacation: The officer just asked Bikkannavar to turn over his PIN, wrote it down, and took his smartphone to another room for about half an hour. When the agent returned the phone, he said he’d run “algorithms” to search for threats. It’s possible Bikkannavar’s phone was searched with one of the mobile acquisition tools that DHS has tested.

The government’s growing library of forensic tool reports is supplemented by other testers. Graduate students at the Forensic Science Center at Marshall University in West Virginia, for example, do some of the same sorts of testing that NIST does. They often work with West Virginia State Police, which runs its own digital forensics lab on campus, to test extraction tools before they’re deployed. They post their results online, just like NIST does, to grow the body of shared knowledge about these tools.

“If we weren’t validating our software and hardware systems, that would come up in court,” said Terry Fenger, the director of Marshall’s Forensic Science Center. “Part of the validation process is to show the courts that the i’s were dotted and t’s crossed.”

A new NIST project called “federated testing” will make it easier for others to pitch in with their own test reports. It’s a free, downloadable disk image that contains all the tools needed to test certain types of forensic software and automatically generate a report. The first report from the project came in recently—from a public defender’s office in Missouri, an indication that digital forensics isn’t just the realm of law enforcement.

I asked Fenger if the technical information being made public in these validation reports could help hackers or criminals circumvent them, but he said the validation data probably wouldn’t be of much value to a malicious hacker. “It’s more or less just the nuts and bolts of how things work,” Fenger said. “Most of the hackers out there are way beyond the level of these validations.”

Tech Start-Ups Have Become Conceptual Art
March 17th, 2017, 04:11 PM

Let’s catalog a few important moments in the history of conceptual art:

In 1917, Marcel Duchamp signed and dated a porcelain urinal, installed it on a plinth, and entered it into the first exhibition for the Society of Independent Artists.

In 1961, Robert Rauschenberg submitted a telegram reading “This is a portrait of Iris Clert if I say so” as his contribution to an exhibition of portraits hosted at Clert’s eponymous Paris gallery.

That same year, Piero Manzoni exhibited tin cans labeled “Artist’s Shit.” The cans purportedly contained the feces of the artist, but opening them to verify the claim would destroy the work.

In 2007, Damien Hirst commissioned a diamond-encrusted, platinum cast of a human skull. It cost £14 million to produce, and Hirst attempted to sell it for £50 million—mostly so that it would become the most valuable work sold by a living artist.

And in 2017, Nigel Gifford designed an edible, unmanned drone meant to deliver humanitarian aid to disaster zones.

Okay, I lied. The last one is a technology start-up. But it might as well be a work of conceptual art. In fact, it makes one wonder if there’s still any difference between the two.

* * *

Conceptualism has taken many forms since the early 20th century. At its heart, the name suggests that a concept or idea behind a work of art eclipses or replaces that work’s aesthetic properties. Some conceptual works deemphasize form entirely. Yoko Ono’s Grapefruit, for example, is a book with instructions on how to recast ordinary life as performance art. Others, like Hirst’s diamond-encrusted skull, lean heavily on the material object to produce effects beyond it. And others, like the pseudonymous graffiti-artist Banksy’s documentary film, Exit Through the Gift Shop, about a street artist who becomes a commercial sensation, deliberately refuse to reveal whether they are elaborate put-ons or earnest portrayals.

In each case, the circulation of the idea becomes as important—if not more so—than the nature of the work itself. And circulation implies markets. And markets mean money, and wealth—matters with which art has had a long and troubled relationship. By holding business at a distance in order to critique it, the arts may have accidentally ceded those critiques to commerce anyway.

Before art was culture it was ritual, and the ritual practice of art was tied to institutions—the church, in particular. Later, the Renaissance masters were bound to wealthy patrons. By the time the 20th-century avant-garde rose to prominence, the art world—all of the institutions and infrastructure for creating, exhibiting, selling, and consuming art—had established a predictable pattern of embrace and rejection of wealth. On the one hand, artists sought formal and political ends that questioned the supposed progress associated with industrial capitalism. But on the other hand, exhibition and collection of those works were reliant on the personal and philanthropic wealth of the very industrialists artists often questioned.

One solution some artists adopted: to use art to question the art world itself. Such is what Duchamp and Rauchenberg and Manzoni and Hirst all did, albeit obliquely. Others were more direct. Hans Haacke, for example, used artwork to expose the connections between the art and corporate worlds; his exhibitions looked more like investigative reports than installations.

Despite attempts to hold capital at arm’s length, money always wins. Artists low and high, from Thomas Kinkade to Picasso, have made the commercialization of their person and their works a deliberate part of their craft.

By the 1990s, when Hirst rose to prominence, high-art creators began embracing entrepreneurship rather than lamenting it. Early in his career, Hirst collaborated with the former advertising executive and art collector Charles Saatchi, who funded The Physical Impossibility of Death in the Mind of Someone Living, a sculpture of a severed tiger shark in three vats of formaldehyde. That work eventually sold for $12 million. Hirst’s relationship with Saatchi was less like that of a Renaissance master to a patron, and more like that of a founder to a venture capitalist. The money and the art became deliberately inextricable, rather than accidentally so.

Banksy, for his part, has often mocked the wealthy buyers who shelled out six-figure sums for his stenciled art, and even for his screenprints. It’s a move that can’t fail, for the artist can always claim the moral high ground of supposed resistance while cashing the checks of complicity.                                                           

Hirst and Banksy have a point: Cashing in on art might have become a necessary feature of art. The problem with scoffing at money is that money drives so much of the world that art occupies and comments on. After the avant-garde, art largely became a practice of pushing the formal extremes of specific media. Abstract artists like Mark Rothko and Jackson Pollock pressed the formal space of canvas, pigment, and medium to its breaking point, well beyond representation. Duchamp and Manzoni did the same with sculpture. And yet, artists have resisted manipulating capitalism directly, in the way that Hirst does. In retrospect, that might have been a tactical error.

* * *

If markets themselves have become the predominant form of everyday life, then it stands to reason that artists should make use of those materials as the formal basis of their works. The implications of this are disturbing. Taken to an extreme, the most formally interesting contemporary conceptual art sits behind Bloomberg terminals instead of plexiglass vitrines. Just think of the collateralized debt obligations and credit default swaps that helped catalyze the 2008 global financial crisis. These are the Artist’s Shit of capitalism, daring someone to open them and look. The result, catastrophic though it was, was formally remarkable as a work made of securities speculation, especially for those who ultimately profited from collapsing the world economy. What true artist wouldn’t dream of such a result?

Even so, finance is too abstract, too extreme, and too poorly aestheticized to operate as human culture. But Silicon Valley start-ups offer just the right blend of boundary-pushing, human intrigue, ordinary life, and perverse financialization to become the heirs to the avant-garde.

Take Nigel Gifford’s drone start-up, Windhorse Aerospace, which makes the edible humanitarian relief drone. In the event of disasters and conflict, the start-up reasons, getting food and shelter to victims is difficult due to lost infrastructure. The drone, known as Pouncer, would be loaded with food and autonomously flown into affected areas. Whether in hope or naivety, Windhorse claims that Pouncer will “avoid all infrastructure problems, corruption or hostile groups,” although one might wonder why bright green airplanes would escape the notice of the corrupt and the hostile.

The product epitomizes the conceit of contemporary Silicon Valley. It adopts and commercializes a familiar technology for social and political benefit, but in such a simplistic way that it’s impossible to tell if the solution is proposed in earnest or in parody. Pouncer can be seen either as a legitimate, if unexpected, way to solve a difficult problem, or as the perfect example of the technology industry’s inability to take seriously the problems it claims to solve. How to feed the hungry after civil unrest or natural disaster? Fly in edible drones from the comfort of your co-working space. Problem solved!

It’s not Gifford’s first trip up where the air gets light, either. His last company, Ascenta, was acquired by Facebook in 2014 for $20 million. Once under Facebook’s wing, Gifford and his team built Aquila, the drone meant to deliver internet connectivity to all people around the globe. Here too, an idea—global connectivity as a human right and a human good—mates to both formal boundary-pushing and commercial profit-seeking. By comparison to Mark Zuckerberg’s desire to extract data (and thereby latent market value) from every human being on earth, it’s hard to be impressed at a wealthy British artist trying to flip a diamond-encrusted skull at 300 percent profit.

Conceptualism has one gimmick—that the idea behind the work has more value than the work itself. As it happens, that’s not a bad definition of securitization, the process of transforming illiquid assets into financial instruments. Whether Windhorse’s edible drones really work, or whether they could effectively triage humanitarian crises is far less important in the short term than the apparent value of the concept or the technology. If humanitarian aid doesn’t work out, the company can always “pivot” into another use, to use that favorite term of start-ups. What a company does is ultimately unimportant; what matters is the materials with which it does things, and the intensity with which it pitches those uses as revolutionary.

This routine has become so common that it’s become hard to get through the day without being subjected to technological conceptualism. On Facebook, an advertisement for a Kickstarter-funded “smart parka” that hopes to “re-invent winter coats” and thereby to “hack winter.” A service called Happify makes the foreboding promise, “Happiness. It’s winnable.” Daphne Koller, the co-founder of the online-learning start-up that promised to reinvent education in the developing world like Windhorse hopes to do with the airdrop, quits to join Google’s anti-aging group Calico. Perhaps she concluded that invincibility would be a more viable business prospect than education.

Me-too tech gizmos and start-ups have less of an edge than conceptual art ever did. Hirst’s works, including the diamond skull and the taxidermied shark, are memento mori—symbols of human frailty and mortality. Even Rauschenberg’s telegram says something about the arbitrariness of form and the accidents of convention. By contrast, when technology pushes boundaries, it does so largely rhetorically—by laying claim to innovation and disruption rather than embodying it. But in so doing, it has transformed technological innovation into the ultimate idea worthy of pursuit. And if the point of conceptual art is to advance concepts, then the tech sector is winning at the art game.

* * *

Today, the arts in America are at risk. President Donald Trump’s new federal budget proposes eliminating the National Endowment for the Arts (along with the National Endowment for the Humanities, and the Corporation for Public Broadcasting). The NEA is especially cheap, making its proposed elimination symbolic more than fiscal. It’s a dream some Republicans have had for decades, thanks in part to the perception that NEA-funded programs are extravagances that serve liberalism.

The potential gutting of the NEA is worthy of concern and lamentation. But equally important, and no less disturbing, is the fact that the role of art, in part, had already shifted from the art world to the business world anyway. In particular, the formal boundary-pushing central to experimental and conceptual artists might have been superseded by the conceptual efforts of entrepreneurship. The much better-funded efforts, at that. As ever, money is the problem for art, rather than a problem within it.

Elsewhere in the art world, successful works have become more imbricated with their financial conditions. Earlier this year, Banksy opened the Walled Off Hotel, an “art hotel” installation in Bethlehem. It’s an idea that demands reassurance; the first entry on the project’s FAQ asks, “Is this a joke?” (“Nope—it's a genuine art hotel,” the page answers.) Despite the possible moral odiousness of Palestinian-occupation tourism, local critics have billed it as a powerful anti-colonialist lampoon. A high-art theme park.

It’s an imperfect solution. But what is the alternative? In the tech industry, the wealthy don’t tend to become arts collectors or philanthropists. Unlike Charles Saatchi, they don’t take on young artists as patrons, even if just to fuel their own egos. Instead they start more companies, or fund venture firms, or launch quasi non-profits. Meanwhile, traditional arts education and funding have become increasingly coupled to technology anyway, partly out of desperation. STEAM adds “art” to STEM’s science, technology, engineering, and math, reframing art as a synonym for creativity and innovation—the conceptual fuel that technology already advances as its own end anyway.

Looking at Duchamp’s urinal and Rauchenberg’s telegram, the contemporary viewer would be forgiven for seeing them as banal. Today, everyone transforms toilets into artworks on Instagram. Everyone makes quips on Twitter that seem less clever as time passes. What remains are already-wealthy artists funding projects just barely more interesting than the products funded by other, already-wealthy entrepreneurs.

From that vantage point, the conceptual art avant-garde becomes a mere dead branch on the evolutionary tree that leads to technological entrepreneurship. Everyone knows that ideas are cheap. But ideas that get executed—those are expensive. Even if that implementation adds precious little to the idea beyond making it material. The concept, it turns out, was never enough. It always needed implementation—and the money to do so.

How Monopoly’s New Tokens Betray Its History
March 17th, 2017, 04:11 PM

This week, Hasbro announced the results of an online vote on the future of tokens in the board game Monopoly. The results are startling: the boot, wheelbarrow, and thimble have been expunged from the iconic game, replaced by a Tyrannosaurus rex, rubber ducky, and penguin. Voters passed up over 60 other contenders, among them an emoji and a hashtag. It’s the latest in a series of efforts to update the game, whose onerous play sessions, old-fashioned iconography, and manual cash-counting have turned some players away.

When today’s players play games, digital or tabletop, they identify with their token or avatar. It becomes “them,” representing their agency in the game. So it’s not surprising that players would want pieces with which they feel affinity. But ironically, affinity and choice in Monopoly token selection undermine part of the history of that game, which juxtaposed capitalist excess with an era of destitution.

Monopoly went through many evolutions. It was first invented as The Landlord’s Game, an educational tool published by Lizzie Magie in 1906 to explain and advocate for the Georgist single tax—the opposite of the take on property ownership that eventually became synonymous with the game (whose design Charles Darrow derived from Magie’s original).

By the 1930s, when Monopoly became popular, economic conditions were very different. To reduce costs of production, early sets included only the paper board, money, and cards needed to play. The tokens were provided by players themselves. As Philip E. Orbanes explains in his book Monopoly: The World's Most Famous Game and How It Got That Way, Darrow’s niece and her friends used bracelet charms and Cracker Jack treats as markers in the game. The sense of choice and identification was still present, to an extent, but the feeling of making do and using things already at hand was more salient. It was the Depression, after all.

When Parker Brothers marketed the complete game that we know today, in the mid-1930s, the company elected to include four of the metal charms direct from the manufacturer that supplied the popular bracelet charms Darrow’s niece had adopted, along with another four of new design. Those original tokens—car, iron, lantern, thimble, shoe, tophat, and rocking horse—were joined by the battleship and cannon soon after.

Despite Hasbro’s attempts to modernize Monopoly, the game is really a period piece. It hides the victory of personal property ownership and rentier capitalism over the philosophy of shared land value in Georgism. And it juxtaposes the economic calamity of the Great Depression with the rising tide of industrialism and monopolism that allowed the few to influence the fates of the many. Playing the game with a thimble—that symbol of domesticity and humility—instead of a T-rex connects players to that history, both in leisure and in economics. Reinventing the game might appear to make it more “relevant” to younger players. But perhaps what today’s Monopoly players really need isn’t easy familiarity and identification, but an invitation to connect to a time when the same game bore different meaning, and embraced a different experience.

The Lifesaving Potential of Underwater Earthquake Monitors
March 17th, 2017, 04:11 PM

The seconds between the warning of an impending earthquake and the moment the quake hits can be the difference between life and death. In that time, automatic brakes can halt trains; people can duck for cover or rush for safety. But current warning systems aren’t always where they are needed, and scientists don’t fully understand what determines the size and location of earthquakes. Nearly 10,000 people were killed in earthquakes in 2015, the majority from the devastating Nepal quake. The federal government estimates that earthquakes cause $5.3 billion in damage per year to buildings in the U.S.

Ground-based sensors help warn of quakes, but they have their limits. Now, a group of researchers at Columbia University are taking measurements somewhere new: underwater. They’re designing a system that could lead to faster warnings for people living near areas affected by underwater earthquakes and tsunamis. If they succeed, they could help reduce the damage caused by these natural disasters and save many lives.

I recently visited a laboratory at Columbia’s Lamont-Doherty Earth Observatory, in Rockland County, New York, where a technician was testing pieces of the boxy, three-foot-long underwater seismometers under a microscope. The lab’s floor-to-ceiling shelves were stacked with bright yellow and orange parts that will have to endure crushing pressures on the ocean floor at depths of thousands of feet for years at a time.

The networks of land-based earthquake monitors around the world warn of quakes by watching for changes in pressure and seismic signals. Underwater sensors could more accurately locate underwater earthquakes than ground-based networks, says Spahr Webb, the Lamont-Doherty researcher leading the project, because “the system is designed to be deployed over the top of a large earthquake and faithfully record the size and location of both the earthquake and the tsunami. … By installing pressure and seismic sensors offshore you get a much more accurate determination of location and depth of a nearby earthquake.”

Webb pointed out the crab-like shape of a thick steel shell that is designed to prevent the seismometers from being pried from the sea floor by fishing trawl nets. “Keeping these things where they belong is the key,” he told me.

When they are launched about a year from now, 10 to 15 seismometers will be carefully lowered by a crane from a ship to the seabed. Similar to the land-based monitors, they will contain sensitive pressure sensors and accelerometers to measure and separate out seismic and oceanic signals. These sensors will monitor subduction zones, the areas where one plate of the earth’s crust slides under another. An earthquake produces a tsunami at a subduction  zone when an underwater plate snaps back like a giant spring after it is forced out of position by the collision of an adjacent plate.

According to Webb, the land-based seismometers monitoring the regions that produce the largest tsunamis are sometimes more than 100 miles away, which hinders speed and accuracy. “A big motivation for the offshore observations is the size of the tsunami from any given earthquake has a large uncertainty based on land observations alone,” says Webb. In Japan, after the devastating 2011 earthquake, an expensive cable with numerous sensors was installed offshore to speed up warnings and boost accuracy. Now the Columbia seabed-based seismometers will obtain data in regions of the globe with similar tsunami hazards as Japan to augment land-based early warning systems.

The project is not alone. Columbia’s seismometer system is just one of a wide array of new earthquake-monitoring technologies that are being developed. “There are many exciting techniques coming online,” says Elizabeth Cochran, a geophysicist with the U.S. Geological Survey.

While the ocean depths offer opportunities to monitor quakes close to their source, for instance, watching from space could provide a wider view. Scientists at University College London have proposed launching several small satellites to look for signs of earthquakes using electromagnetic and infrared sensors. So far, experiments have proven that the concept works, but a problem has kept the project from getting off the ground: Electromagnetic and infrared  signals are emitted by all sorts of things, natural as well as man-made.

Dhiren Kataria, one of the leaders of the proposed project, which has been dubbed TwinSat, hopes that using a large enough number of satellites should allow researchers to separate out the seismic from the non-seismic events. Multiple satellites would also provide extensive global coverage, because each would orbit the earth every 90 minutes, he adds.

The TwinSat team has previously failed to get funding from the U.K. Space Agency, but it plans to resubmit its proposal in the next few months. If approved, the team could launch its satellites within three years, Kataria claims. To keep costs low, the satellites are designed to be small and use some off-the-shelf commercial components.

Another approach researchers are using is turning cell phones into science instruments. The app MyShake constantly monitors a phone’s motion sensors to analyze how the device is shaking. If the movement fits the vibrational profile of an earthquake, the app relays this information along with the phone’s GPS coordinates to the app’s creators, the seismological laboratory at the University of California, Berkeley, for analysis.
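The article doesn’t spell out MyShake’s classifier, but a toy version of the general idea (flag moments when short-term shaking jumps well above the recent background) can be written as a classic short-term/long-term average, or STA/LTA, trigger. The sketch below is a hypothetical illustration, not Berkeley’s code; the window sizes and threshold are made up, and readings are assumed to be gravity-compensated.

```python
# Toy shaking trigger: flag samples where the short-term average of
# acceleration magnitude greatly exceeds the long-term background.
# Hypothetical illustration only; not MyShake's actual algorithm.
import math
from collections import deque

def magnitude(sample):
    """Euclidean magnitude of an (x, y, z) accelerometer reading
    (assumed gravity-compensated, i.e. linear acceleration)."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def sta_lta_trigger(samples, short_win=25, long_win=500, threshold=4.0):
    """Yield True for each sample where recent shaking stands out
    against the background; False otherwise."""
    short_buf = deque(maxlen=short_win)
    long_buf = deque(maxlen=long_win)
    for sample in samples:
        m = magnitude(sample)
        short_buf.append(m)
        long_buf.append(m)
        if len(long_buf) < long_win:
            yield False                      # still learning the background
            continue
        sta = sum(short_buf) / len(short_buf)
        lta = max(sum(long_buf) / len(long_buf), 1e-9)
        yield sta / lta > threshold          # candidate quake-like shaking
```

In a real deployment a trigger like this would only be the first step: the flagged, GPS-tagged events from many phones still have to be compared server-side to rule out dropped phones, dancing, and bus rides.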

While the app’s not intended to replace traditional seismic sensor networks like those run by the U.S. Geological Survey, says Richard Allen, the seismological laboratory’s director, it could provide faster and more accurate warnings through vast amounts of crowd-sourced data. More than 250,000 people have downloaded the app since it debuted a year ago.

Quicker warnings like these can be used to improve safety when incorporated directly into existing infrastructure. San Francisco’s Bay Area Rapid Transit has integrated Allen’s earthquake warnings into its system so that trains automatically slow when they receive a signal that an earthquake will hit. The system relies on the fact that the electronic signals from monitoring stations travel faster than seismic waves, giving the brakes time to act. “I can push out the warning before many people can feel the tremors,” Allen says.
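As a rough illustration of why that works (the numbers below are generic textbook values and an assumed processing delay, not figures from BART or Allen’s system): strong shaking spreads at roughly S-wave speed, a few kilometers per second, while the alert itself travels at effectively the speed of light, so every extra kilometer between the train and the epicenter buys a fraction of a second of braking time.

```python
# Back-of-the-envelope warning time: seconds between receiving an alert
# and the arrival of strong shaking. Illustrative values only.
S_WAVE_SPEED_KM_S = 3.5     # rough S-wave speed in the crust
PROCESSING_DELAY_S = 5.0    # assumed detection-plus-alert latency

def warning_seconds(distance_km):
    """Approximate lead time for a site distance_km from the epicenter."""
    return distance_km / S_WAVE_SPEED_KM_S - PROCESSING_DELAY_S

for d in (20, 50, 100):
    print(f"{d} km from the epicenter: about {warning_seconds(d):.0f} s of warning")
```

Close to the epicenter the lead time can shrink to nothing, which is why the earliest possible detection, offshore sensors included, matters so much.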

Even better than faster earthquake warnings would be a way to predict quakes. Researchers at Los Alamos National Laboratory are using artificial intelligence to simulate earthquakes so that they can forecast when they will occur. But Cochran of the USGS doubts it will ever be possible to reliably predict quakes. “Earthquakes are very complex,” she says. “It’s hard to predict such chaotic systems.”

Welcome, Please Remove Your Shoes
March 16th, 2017, 04:11 PM

I hoard slippers—the thin-soled, terry kind that many hotels include in their amenity packages. My house is full of them, some still plastic-wrapped. Shoes that will never be good for anything but indoor wear. Yet to me, they are simply too precious to leave behind.

I grew up in the USSR, where tapochki—indoor slippers—were worn habitually. We changed into them when we came home, leaving the dirt of the outdoors at the entrance. We carried them to school, where fellow students, posted at the door by the principal, stood guard with the sole purpose of checking our bags for smenka, the change of footwear. Museums provided containers of felt mules by the entrance for visitors to don over boots before entering the halls. And we knew that when we visited a friend, we would be expected to take off our shoes and wear the slippers the host owned just for that occasion. Walking inside a home—any home—while still wearing outdoor shoes was bad form.

* * *

The origin of the habit is mysterious, but tapochki occupy an important part of the Russian psyche. The pragmatic benefits are obvious—casting off outdoor shoes keeps the floors and rugs clean. But the real benefit is symbolic.

A decade ago, a monument to Oblomov—the titular character of Ivan Goncharov’s famous novel about a lackadaisical Russian nobleman—was installed in the city of Ulianovsk. The monument features Oblomov’s couch, with his slippers underneath it. Created by a local welder, the mules celebrate the novelist’s ability to infuse personal objects with a symbolism that captured the Russia of his day. In the novel, Ilya Ilich Oblomov spends most of his waking hours in his robe lying on a couch and doing nothing. The novel had political overtones; it was published two years before the abolition of serfdom in Russia and has been credited by some as a portrayal of general apathy among the Russian nobility. Oblomov’s robe, the couch, and the slippers represent the hero’s indifference to life outside his home. But they also symbolize the domestic space, the feeling of leaving the worries of the world at the door, and the safety and comfort that only one’s abode can offer.

Personal objects separating the outside and the inside can be found in European paintings as early as the 15th century. In The Arnolfini Portrait (1434), Jan Van Eyck included two pairs of pattens—the wooden clogs usually worn over the indoor shoes to protect footwear from the mud and dirt of the outside. The 1514 engraving Saint Jerome in His Study, by Albrecht Dürer, also features shoes that seem to indicate domestic use—a pair of mules in the foreground, stored under a bench with books and pillows. Whether they are there to suggest their purpose as outdoor-only footwear or the beginning of the practice of using mules at home we may never know. Yet just as in Van Eyck’s work, a discarded pair of shoes—the shoes that the subject isn’t wearing at home—may be the indication of a new custom taking hold: a custom of separating footwear into indoor and outdoor.

Around this time, the conquests of the Ottoman Empire brought Eastern habits to the European continent. “[Most Ottoman people] were wearing outdoor shoes over the indoor shoes like galoshes,” explains Lale Gorunur, the curator of the Sadberk Hanim Museum in Istanbul. “But they’d never go indoors with outdoor shoes. They’d always take off the outdoor shoes at the gate of the house.” Territories under the empire’s rule seemed to adopt this habit, and slippers remain common in countries like Serbia and Hungary.

“We have the tradition of indoor shoes because we were under the Ottoman rule,” confirms Draginja Maskareli, a curator at the Textile and Costume Department of the Museum of Applied Art in Belgrade. When she was a student in the early 1990s, Maskareli visited cousins of Serbian origin in Paris, traveling with slippers in tow. “They were shocked that we had indoor shoes.”

Although the late 20th-century Parisians seemed amused at the idea, their predecessors were enamored with indoor shoes. “By the 17th century, an increasing number of men are having portraits done of themselves in a kind of casual, domestic setting in their mules, their slippers,” explains Elizabeth Semmelhack, a curator of the Bata Shoe Museum in Toronto. “By the 18th century, where intimacy and intimate gatherings become very much a part of social culture, you begin to see more pictures of women and their mules.”

The Victorian era added its own twist to the infatuation with the indoor shoe. Women used Berlin wool work, a needlepoint style popular at the time, to make the uppers of their husbands’ home slippers. “[They] would take those uppers to a shoemaker who would then add a sole. And they would be gifted to the husband to wear while he is smoking his pipe by the fire in the evening,” says Semmelhack.

Portraits of the Russian upper classes of the 18th and 19th centuries frequently show their subjects in either Ottoman-style mules or thin slipper-shoes intended for indoor use. The same couldn’t be said of the poor. Peasants and laborers are shown either barefoot, wearing boots meant for outdoor work, or donning valenki, the traditional Russian felt boots. Perhaps because of this link between indoor footwear and the leisure of the rich, tapochki were snubbed immediately following the 1917 Russian Revolution. Remnants of the maligned old world had no place in the new Soviet paradigm. But the sentiment didn’t stick. Although never as extravagant or ornate as before, tapochki were soon back in most Soviet homes, offering their owners comfort after a long day of building the Communist paradise.

Today, attitudes towards taking off shoes indoors vary, often by national culture. An Italian friend told me it was considered rude to go barefoot in the house in Italy, and a Spanish friend raised her eyebrows when I offered a pair of slippers. “Spaniards don’t take their shoes off.”

In Japan, where slippers are a Western introduction, most people take off their outdoor shoes before going indoors. Jordan Sand, a professor of Japanese history at Georgetown University, notes that architecture accommodates the practice. “The Japanese live in dwellings with raised floors. It’s basic, even in modern apartment buildings, that every private dwelling has space at the entry,” he explains. “As you enter the door there is a little space and step up and the rest of the house is higher than the outside. You shed your footwear there. In a traditional house, most of the interior space is covered with tatami mats. No footwear is worn on tatami mats.” While the Japanese generally go barefoot or wear socks on the mats, there are exceptions. In those parts of the house that aren’t covered by tatami—the kitchen, the hallway, and the toilet—people wear slippers. A separate pair of slippers is reserved specifically for the toilet, where it stays.

* * *

When I moved to the U.S. in 1989, slippers disappeared from my life. Americans never took off their shoes, and their wall-to-wall carpeting bore traces of the outside, tracked indoors on the soles of their footwear. I could never get used to it. My shoes come off immediately whenever I enter my house, and I ask my guests to take off theirs. The panoply of terry mules I have hoarded from hotels is always on hand to help.

As for me—my personal slippers wait for me by the door. When I slip them on, my feet are freer, my floors stay cleaner, and I always feel as if I’ve truly come home.


This article appears courtesy of Object Lessons.

Scientists Brace for a Lost Generation in American Research
March 16th, 2017, 04:11 PM

The work of a scientist is often unglamorous. Behind every headline-making, cork-popping, blockbuster discovery, there are many lifetimes of work. And that work is often mundane. We’re talking drips-of-solution-into-a-Petri-dish mundane, maintaining-a-database mundane. Usually, nothing happens.

Scientific discovery costs money—quite a lot of it over time—and requires dogged commitment from the people devoted to advancing their fields. Now, the funding uncertainty that has chipped away at the nation’s scientific efforts for more than a decade is poised to get worse.

The budget proposal President Donald Trump released on Thursday calls for major cuts to funding for medical and science research; he wants to slash funding to the National Institutes of Health by $6 billion, which represents about one-fifth of its budget. Given that the NIH says it uses more than 80 percent of its budget on grant money to universities and other research centers, thousands of institutions and many more scientists would suffer from the proposed cuts.

“One of our most valuable natural resources is our science infrastructure and culture of discovery,” said Joy Hirsch, a professor of psychiatry and neurobiology at the Yale School of Medicine. “It takes only one savage blow to halt our dreams of curing diseases such as cancer, dementia, heart failure, developmental disorders, blindness, deafness, addictions—this list goes on and on.”

For decades, scientists have been rattled by the erosion of public funding for their research. In 1965, the federal government financed more than 60 percent of research and development in the United States. “By 2006, the balance had flipped,” wrote Jennifer Washburn a decade ago, in a feature for Discover, “with 65 percent of R&D in this country being funded by private interests.”

This can’t be all bad, can it? Given the culture of competition in Silicon Valley, where world-changing ideas attract billions upon billions of dollars from eager investors, and where many of the brightest minds congregate, we may well be entering a golden era of private funding for science and medicine.

Along with the business side of science, the world’s tech leaders have built a robust philanthropic network for research advancement. The Bill & Melinda Gates Foundation is a major force in the prevention of infectious diseases, for example. Last year, the Facebook founder Mark Zuckerberg launched his own foundation—with his wife, Priscilla Chan, who is a pediatrician—aiming to help “cure, prevent or manage all diseases in our children’s lifetime.” Between those two initiatives alone, billions of dollars will be funneled to a variety of crucial research efforts in the next decade.

But that amount still doesn’t approach the $26 billion in NIH research grants doled out to scientists every year. For about a decade, stagnant funding at the NIH was considered a serious impediment to scientific progress. Now, scientists say they are facing something much worse.

I asked more than a dozen scientists—across a wide range of disciplines, with affiliations at private schools, public schools, and private foundations—about the proposed budget, and their concern was resounding. The consequences of such a dramatic reduction in public spending on science and medicine would be deadly, they told me. More than one person said that losing public funding on this scale would dramatically lower the country’s global scientific standing. One doctor said he believed Trump’s proposal, if passed, would usher in a lost generation in American science.

“Where do I start?” said Hana El-Samad, a biochemist at the University of California, San Francisco, School of Medicine and an investigator in the Chan Zuckerberg Biohub program, one of the prestigious new privately funded science initiatives in Silicon Valley. In her research, El-Samad analyzes biological feedback loops, studying how they work so that she can predict their failure in diseases.

“First, we most certainly lose diversity in science—ranging from diversity of topics researched to diversity of people doing the research,” she told me. “Since we don’t know where real future progress will come from, and since history tells us that it can and almost certainly will come from anywhere—both scientifically and geographically—public funding that precisely diversifies our nation’s portfolio is crucial.”

Private funding, on the other hand, is often narrowly focused. Consider, for example, Elon Musk’s obsession with transporting humans to Mars. The astronaut Buzz Aldrin told CNBC this week that Musk’s plan may be well-funded, but it’s not very well thought out—and that cheaper technology isn’t necessarily better. “We went to the moon on government-designed rockets,” Aldrin said.

Even if Musk’s investment in SpaceX does represent a world-changing scientific effort—it’s not enough by itself.

“Funding like the Chan-Zuckerberg Initiative is fantastic and will be transformative for Bay Area Science,” said Katherine Pollard, a professor at the UCSF School of Medicine and a Chan Zuckerberg Biohub researcher who studies the human microbiome. “But the scope and size of even a large gift like this one cannot come close to replacing publicly supported science.” The unrestricted research funding she’s getting as part of her work with the Chan Zuckerberg program is “still only 10 percent of what it costs to run my lab,” she said.

And what happens to all the crucial basic science without billionaire backing—the kind of research with wide-ranging applications that can dramatically enhance human understanding of the world?  NIH funding is spread across all disciplines, several scientists reminded me, whereas private funding tends to be driven by the personal preferences of investors.

Plus, scientific work is rarely profitable on a timescale that delights investors. The tension between making money and making research strides can result in projects being abandoned altogether or pushed forward before they’re ready. Just look at Theranos, the blood-testing company that was once a Silicon Valley darling. As The Wall Street Journal reported last year, even when the company’s technology hadn’t progressed beyond lab research, its CEO was downplaying the severity of her company’s myriad problems—both internally and to investors. Its fall from grace—and from a $9 billion valuation—is a stunning and instructive illustration of where private and public interests in scientific research can clash.

But also, in a privately-funded system, investor interest dictates the kind of science that’s pursued in the first place.

“Put simply, privatization will mean that more ‘sexy,’ ‘hot’ science will be funded, and we will miss important discoveries since most breakthroughs are based on years and decades of baby steps,” said Kelly Cosgrove, an associate professor of psychiatry at Yale University. “The hare will win, the tortoise will lose, and America will not be scientifically great.”

America’s enduring scientific greatness rests largely on the scientists of the future. And relying on private funding poses an additional problem for supporting people early in their careers. The squeeze on public funding in recent years has posed a similar concern, as young scientists are getting a smaller share of key publicly-funded research grants, according to a 2014 study published in the Proceedings of the National Academy of Sciences. In 1983, about 18 percent of scientists who received the NIH’s leading research grant were 36 years old or younger. In 2010, just 3 percent of them were. Today, more than twice as many such grants go to scientists who are over 65 years old compared with people under 36—a reversal from just 15 years ago, according to the report.

The proposed NIH cuts “would bring American biomedical science to a halt and forever shut out a generation of young scientists,” said Peter Hotez, the dean of the National School of Tropical Medicine at Baylor College of Medicine. “It would take a decade for us to recover and move the world's center of science to the U.S. from China, Germany, and Singapore, where investments are now robust."

The cuts are not a done deal, of course. “Congress holds the purse strings, not the president,” said Senator Brian Schatz, a Democrat from Hawaii and a member of the Appropriations Committee, in a statement.

In the meantime, there’s a deep cultural question bubbling beneath the surface of the debate over science funding, one that seems to reflect a widening gap in trust between the public and a variety of American institutions. A Pew survey in 2015 found that more than one-third of people said they believed private investments were enough to ensure scientific progress. And while most people said they believed government investment in basic scientific research “usually” paid off in the long run, other research has shown a sharp decline in public trust in science—notably among conservatives. This erosion of trust means that the politicization of specific areas of scientific inquiry, like climate change and stem-cell research, may have deep consequences for scientific advancement more broadly.

El-Samad, the biochemist, describes this dynamic as the weakening of a social contract that once made the United States the scientific beacon of the world. In her view, there is something almost sacred about using taxpayer dollars to fund research.

Using “the hard earned cash of the citizens, all of them” has constituted an enduring bond between the scientist and the public, she told me. “It was clear that we were, as scientists, bound by the necessity to pay them back not in kind, but in knowledge and technology and health. And they, the citizens, took pride and well deserved ownership of our progress. I truly believe that this mutual investment and trust is what made science in the United States of America a model to follow for the rest of the world, and also gave us the tremendous progress of the last decades. Huge setbacks will ensue if this erodes.”

Trump's Cyber Skepticism Hasn't Stopped Charges Against Foreign Hackers
March 15th, 2017, 04:11 PM

President Donald Trump doesn’t put a lot of stock in security researchers’ ability to track down cyberattackers. When the Democratic National Committee’s systems were breached during the presidential campaign, he shrugged and said just about anyone could have been behind the hacks—even though the intelligence community pointed fingers straight at Russian President Vladimir Putin. “Unless you catch ‘hackers’ in the act, it is very hard to determine who was doing the hacking,” he tweeted in December.

Just before Trump was inaugurated, I wondered if his unwillingness to endorse the practice of cyber-attribution would derail the Justice Department’s pattern of bringing indictments and charges against foreign hackers—and even embolden hackers to launch more cyberattacks, without fear of repercussions.

But while the president cast doubt on the worth of attribution, the Justice Department appears to have pressed on with its campaign to slap foreign hackers with public criminal charges. On Wednesday, the department announced charges against four Russians—two intelligence agents and two hired hackers—for the 2014 data breach at Yahoo that compromised 500 million user accounts.

The two agents worked for a branch of the Russian Federal Security Service, or FSB, called the Center for Information Security. That agency is the FBI’s point of contact within the Russian government for fighting cybercrime—but the FBI alleges that, instead of investigating cyberattacks, the two officers participated in one.

The indictment accuses the officers, 33-year-old Dmitry Aleksandrovich Dokuchaev and 43-year-old Igor Anatolyevich Sushchin, of hiring a pair of hackers to help them break into Yahoo’s systems. Mary McCord, the acting assistant attorney general in the Justice Department’s national-security division, said the agents are suspected of orchestrating the cyberattack in their official capacity as members of the FSB.

One of the hackers was already notorious. Alexsey Alexseyevich Belan had already been indicted in the U.S. twice—once in 2012 and once in 2013—and was added to the FBI’s list of most-wanted cybercriminals in 2013. The other hacker, Karim Baratov, was brought on to help hack into 80 non-Yahoo accounts, using information gleaned from the accounts that had already been compromised. Baratov, who lives in Canada, was arrested on Tuesday. The other three defendants remain at large in Russia, which doesn’t have an extradition agreement with the United States.

According to the indictment, the hackers had access to Yahoo’s networks all the way until September 2016, two years after they first got in.

When the data breach was announced that month, it was one of the largest single breaches ever made public. But it was eclipsed in December, when the company announced that another breach, this one dating to 2013, had compromised one billion user accounts. Yahoo said at the time that the two hacks were separate—but that it suspected the “same state-sponsored actor” was behind both.

One of the tricks the Russian hackers used to steal information was to forge cookies—small packages of data that track users and tell websites which accounts a user is signed into, among other things—in order to access at least 6,500 user accounts, the Justice Department alleges. (The 2013 hack also used forged cookies, according to Yahoo.)
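To make the forged-cookie idea concrete, here is a minimal sketch of a generic HMAC-signed session cookie in Python. It is an assumption-laden illustration, not Yahoo’s actual design: the field layout, the helper names, and the SERVER_SECRET key are all hypothetical, and the indictment does not describe the mechanism at this level of detail. The point is simply that a server which accepts any cookie carrying a valid signature will also accept one minted by an attacker who has obtained the signing material.

```python
# Minimal sketch of a signed session cookie (hypothetical scheme, not Yahoo's).
import hashlib
import hmac

SERVER_SECRET = b"hypothetical-signing-key"  # secret the server alone should hold

def make_cookie(username: str) -> str:
    """Issue a cookie asserting that `username` is logged in."""
    payload = f"user={username}"
    signature = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def is_valid(cookie: str) -> bool:
    """The server trusts any cookie whose signature checks out."""
    payload, _, signature = cookie.rpartition("|")
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Anyone who obtains SERVER_SECRET (or equivalent signing data) can forge a
# cookie for any account, without ever needing the account's password.
forged = make_cookie("victim@example.com")
assert is_valid(forged)
```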

The hackers targeted a wide range of people: government officials, intelligence and law enforcement agents, and employees of an unnamed “prominent Russian cybersecurity company.” They also accessed accounts that belonged to private companies in the U.S. and elsewhere, the indictment claims.

Some of the information was probably useful for the intelligence officers, but Belan, the hired hacker, appears to have used the opportunity presented by the enormous trove of stolen Yahoo accounts to make a little money. He searched emails for credit-card and gift-card numbers, and scraped the contact lists from at least 30 million accounts for use in a large-scale spam campaign.

The FBI is also investigating Russian cyberattacks on the Democratic National Committee, but Wednesday’s indictment doesn’t draw a connection between that event and the Yahoo hack.

As she announced the charges, McCord, the acting assistant attorney general, said additional options for punishing Russia for the hack are still on the table. An executive order that former president Barack Obama signed in March, for example, gave the Treasury Department the power to set up economic sanctions in response to cyberattacks or espionage.

FBI and Justice Department officials have said in the past that bringing public charges against foreign hackers for state-sponsored cyberattacks can deter others from hacking American people and organizations. Belan clearly wasn’t deterred by the charges brought against him in 2012 and 2013, but it’s possible that the prospect of joining the cyber most-wanted list has convinced other, lower-profile hackers not to participate.

Paul Abbate, the executive assistant director in the FBI’s cybercrime branch, said the government has formally requested that Russia send the defendants to be tried in the U.S. But without an extradition treaty, and given that Russia’s own intelligence service is implicated in the indictment, working together will be, as Abbate delicately put it, a challenge. “We can now gauge the level of cooperation we’ll see from them,” he said.