Net Neutrality Was Never Enough
December 15th, 2017, 01:38 PM

Ajit Pai, the chairman of the Federal Communications Commission, opens a bag of Cheetos with his teeth, dumps them onto a hipster food-court lunch bowl, and slathers it in Sriracha sauce. He snaps a pic for social media.

It’s a scene from a video, “Seven Things You Can Still Do on the Internet After Net Neutrality,” shot by the conservative outlet The Daily Caller and published Wednesday, the day before the FCC voted to gut rules requiring internet traffic to be treated equally. Besides “’gramming your food,” Pai also assures The Daily Caller’s readers they will still be able to take selfies, binge watch Game of Thrones, cosplay as a Jedi, and do the Harlem shake.

Net-neutrality proponents have lambasted the video, and with good reason. A federal appointee charged with stewardship of public communications infrastructure comes off as insolent.

Even so, there’s something undeniably true about the video, which has only been amplified by reactions to the FCC’s vote: The internet that net neutrality might protect is also a petri dish of the pettiness and derision Pai acts out in the video. In addition to being a public good that ought to be regulated, the internet is also an amplifier of panic, malice, and intemperance. Like it or not, those vices helped get the nation into the political moil it currently faces, from internet policy to immigration to taxation to health care—as well as to the validity of elections themselves.

The most important step for the future of the internet, for citizens, politicians, and corporations alike, is to calm down, research, and debate its future. But the internet’s nature might make that impossible.

* * *

As had been expected, the FCC voted yesterday to roll back the Obama-era Open Internet Order, which treated broadband internet service providers—Comcast, Verizon, Time Warner, and their ilk—as common carriers under Title II of the Communications Act. Those protections required ISPs to treat internet traffic equally, preventing them from blocking or otherwise interfering with access to specific websites, apps, or other resources. Under the new rules, dubbed “Restoring Internet Freedom” by the FCC, ISPs would have to disclose any steps they take to limit or sell special access.

The FCC voted in favor of repeal despite widespread support for net neutrality among the American public—and despite the fact that the public-comment process for the new policy appears to have been compromised by millions of fraudulent entries.

Those factors will likely come up in legal challenges to the repeal, which are already mounting. The new rules won’t take effect for at least several months. State attorneys general have begun filing lawsuits. And Congress could adopt legislation that would codify net neutrality into law, a move that activists are encouraging citizens to lobby for. The Democratic senator Ed Markey announced plans for legislation to reverse the FCC’s repeal, and given the bipartisan support for net neutrality among the electorate, it’s possible that such a bill could find support across the aisle.

Possible, but hardly guaranteed. A letter to the FCC from the House Committee on Energy and Commerce supporting the FCC’s action was signed by 107 Republican members of Congress. Of those, Motherboard reported that 84 have taken telco-industry contributions.

Even though the FCC’s action—a 3–2 vote along party lines—has been anticipated since the proposal’s announcement just before Thanksgiving, public response to yesterday’s rollback was severe. On Twitter, a woman posted a video of her 11-year-old sister’s school lunch table shouting “Ajit Pai is a loser.” A Missouri man created a $500,000 crowd-funding campaign (since taken down) to “deport Ajit Pai”—a dense morsel of consumer rights mixed with xenophobia (Pai is Indian American) that typifies the ethos of the internet.

The media’s response has been similarly dramatic. Jimmy Kimmel weighed in, calling Pai a “jackhole” who wants to line the pockets of big telco at the cost of the public. Briefly, CNN ran the headline, “End of the Internet as We Know It.”

One such fear, widely held by net-neutrality proponents, is that ISPs might slice up internet service into tiers, as they have done for cable television. Stoking this fear, @therealbanksy, the Twitter account that ostensibly represents the anonymous British artist Banksy, posted a warning: “If you don't want to pay extra for your favorite sites you need to be supporting #NetNeutrality.” Along with it, some hypothetical fees: Twitter: $14.99/month; Netflix: $9.99/movie; Google: $1.99/search. As I write this, it has been retweeted 162,000 times. @therealbanksy, whose profile reads “fan account,” aptly represents the internet itself: billions of people, who might also be dogs, criminals, children, or senators, all jockeying for a shred of one another’s attention at all costs.

Even the FCC hearing itself was disrupted by the internet’s feral anxiety about itself. While details are still uncertain, the meeting was briefly interrupted due to a security threat. After bomb-sniffing dogs cleared the area, the vote resumed. “The left’s outcry at Mr. Pai ‘killing’ internet freedom,” the conservative Wall Street Journal editorial board wrote in response, “has been so overwrought that the FCC meeting room had to be cleared Thursday for a security threat.”

Pai’s Daily Caller video inspired similar indignation. The video appears to feature a cameo by an apologist for Pizzagate, the false conspiracy theory about a Democratic child-sex-trafficking ring run from a Washington, D.C., pizza joint. The net-neutrality opposition has latched onto this connection, using Pai’s association with the publication as an indictment of his position on common carriage.

The internet has amplified excess, making any one extreme act or idea require an even more extreme response. An arms race for profligacy.

* * *

In truth, nobody yet knows how the net-neutrality rollback will affect anyone—consumers, telcos, big tech, or start-ups. Internet zealots warn of widespread blocking and throttling, not to mention pay-for-play fast lanes that might benefit big companies like Netflix and Google while stifling upstarts’ innovation and growth. ISPs, aware of how hot the issue is, will likely take no immediate action.

When they do act, the changes will probably be invisible to consumers anyway. Pay-for-play deals with big providers might make some services load faster and others slower. Small delays can be fatal for adoption and continued use, and the costs of operating a new business in such an environment might make some start-ups inviable.

As I’ve argued before, progressive advocacy for net neutrality can’t credibly claim to be acting on behalf of consumers and small businesses when venture-backed technology start-ups are the main beneficiary. The dissenting statements of both Democratic FCC commissioners, Mignon Clyburn and Jessica Rosenworcel, give special mention to the “innovators” who might be harmed by dismantling net neutrality: big tech companies that might have to pay tariffs to telcos, or small tech companies that might struggle to do so. Bandwidth-heavy services would be most affected—the “next Netflix,” as advocates often name it—but the online-video market may already have been captured by incumbents, just as search and social networking have been.

Pai has justified the rollback on the grounds that the existing guidance overregulated the telecommunications sector, which operated without formal net-neutrality rules until the Open Internet Order was adopted in 2015. Since then, Pai insists, telco investment in broadband infrastructure has declined. Spurring more and better access, the FCC has decided, is more important than regulating broadband as common carriage.

The 2015 adoption of the Open Internet Order offered leverage to big tech companies like Google (now Alphabet), who might have pressed further into the broadband-service space. After all, some 50 million U.S. homes have only one choice for broadband service, driving service costs up. But instead, in 2016, Alphabet curbed expansion of its residential fiber network, which it began building in 2010. Rolling out fiber is expensive, complicated, and breeds dissatisfaction. Unlike search or docs, services that run in the cloud, fiber has to be installed and maintained in the physical world. In Atlanta, where I live, Google Fiber installation caused numerous gas-line breaks, along with harder-to-track disputes with property owners over digging and repair.

Even once installed, the switching cost of moving from a provider like Comcast to Google is high; people hate waiting for service technicians. It’s just more profitable to sell digital ads against searches and videos that other people make. Google’s net profits in Q3 2017 alone totaled $6.7 billion.

With this situation in mind, it’s at least possible that terror over the apparent end of net neutrality might spur broadband investment and competition, especially if providers commit to equal treatment. It’s also possible that small-scale, start-up innovation in broadband access is impossible in America absent a threat to the internet. In the wake of the FCC’s vote, Vice announced plans to create a fiber-backed mesh network for the Brooklyn neighborhood where its offices are located. Such experiments are not new, but it’s unusual for a media company to ponder entering the ISP business. Nothing was stopping Vice from taking such a step before net neutrality reached the precipice—except, perhaps, a credible business justification for doing so, even if just as a branding exercise.

Other, better solutions to broadband competition exist. One is local-loop unbundling, a policy that requires telcos to share last-mile connections with competitors. It’s one of the reasons that broadband is so much cheaper in Europe than it is in the United States. The 1996 Telecommunications Act included an unbundling provision, requiring providers to offer access to their networks at “reasonable” cost when “technically feasible.” The policy spurred competition in DSL, but fiber was too hypothetical at the time, and it wasn’t covered in the act. Even so, small competitors had trouble getting access to central-office facilities for service provisioning once they had last-mile access. The big telcos had no trouble finding ways to argue against technical feasibility.

The problem with regulatory apparatuses like local-loop unbundling is that they are boring. Nobody wants to think about the complicated, messy infrastructure that actually makes it possible for irascible tweets to make it from the phones in people’s hands to the servers on which they are stored. It’s much simpler and more comforting to imagine the internet as the “cloud” of its marketers—an ethereal force that surrounds you and me and everyone. One that, like air or water, sates a basic need of human life.

* * *

In her dissent—a “eulogy,” she even calls it—Rosenworcel, the FCC commissioner, writes, “the future of the internet is the future of everything. That is because there is nothing in our commercial, social, and civic lives that has been untouched by its influence or unmoved by its power.”

This sentiment is both true and terrifying. The idea that a global data network would have so much power and influence should give everyone pause. Not only because it implies that so much of public and private life is conducted by means of that infrastructure. But also because it inspires people—and businesses, and government agencies, and elected officials themselves—to press toward the worst extremes of their character. It’s undeniable that modern society relies on the internet. Less often discussed are the impacts of such a dependence. Until they reach a breaking point, like the compromise of democracy or the mass exposure of personal information.

“Internet access became the dial tone of the digital age,” Rosenworcel’s dissent continues. She understates matters. Instead, it has become this era’s heartbeat. Data has become the blood that courses through the veins of ordinary life. This is why everyone in the debate is so passionate. But it’s also worth remembering that this is just a metaphor. The world is still out there, underneath and above all the fiber-optic lines that would take it online.

When it comes to net neutrality, supporting or opposing it is no longer sufficient. Killing net neutrality probably won’t make things better, but keeping it without any other substantive changes will ensure things get worse—instead of civics, only mania will remain. The internet is as much the enemy as it is the hero of contemporary life. It is not the free and open internet that must be eulogized, but the public’s blindness to its consequences.

‘The Basic Grossness of Humans’
December 15th, 2017, 01:38 PM

Lurking inside every website or app that relies on “user-generated content”—so, Facebook, YouTube, Twitter, Instagram, Pinterest, among others—there is a hidden kind of labor, without which these sites would not be viable businesses. Content moderation was once generally a volunteer activity, something people took on because they were embedded in communities that they wanted to maintain.

But as social media grew up, so did moderation. It became what the University of California, Los Angeles, scholar Sarah T. Roberts calls “commercial content moderation,” a form of paid labor that requires people to review posts—pictures, videos, text—very quickly and at scale.

Roberts has been studying the labor of content moderation for most of a decade, ever since she saw a newspaper clipping about a small company in the Midwest that took on outsourced moderation work.

“In 2010, this wasn’t a topic on anybody’s radar at all,” Roberts said. “I started asking all my friends and professors. Have you ever heard of people who do this for pay as a profession? The first thing everyone said was, ‘I never thought about it.’ And the second thing everyone said was, ‘Don’t computers do that?’ Of course, if the answer in 2017 is still no, then the answer in 2010 was no.”

And yet there is no sign of these people on a platform like Facebook or Twitter. One can register complaints, but the facelessness of the bureaucracy is total. That individual people are involved in this work has only recently become better known, thanks to scholars like Roberts, journalists like Adrian Chen, and workers in the industry like Rochelle LaPlante.

In recent months, the role that humans play in organizing and filtering the information that flows through the internet has come under increasing scrutiny. Companies are trying to keep child pornography, “extremist” content, disinformation, hoaxes, and a variety of unsavory posts off of their platforms while continuing to keep other kinds of content flowing.

They must keep the content flowing because that is the business model: Content captures attention and generates data. They sell that attention, enriched by that data. But what, then, to do with the pollution that accompanies the human generation of content? How do you deal with the objectionable, disgusting, pornographic, illegal, or otherwise verboten content?

The one thing we know for sure is that you can’t do it all with computing. According to Roberts, “In 2017, the response by firms to incidents and critiques of these platforms is not primarily ‘We’re going to put more computational power on it,’ but ‘We’re going to put more human eyeballs on it.’”

To examine these issues, Roberts pulled together a first-of-its-kind conference on commercial content moderation last week at UCLA, in the midst of the wildfires.

For Roberts, the issues of content moderation don’t merely touch on the cost structure of these internet platforms. Rather, they go to the very heart of how these services work. “What does this say about the nature of the internet?” she said. “What are the costs of vast human engagement in this thing we call the internet?”

One panel directly explored those costs. It paired two people who had been content moderators: Rasalyn Bowden, who became a content-review trainer and supervisor at Myspace, and Rochelle LaPlante, who works on Amazon Mechanical Turk and cofounded MTurkCrowd.com, an organizing site for people who work on that platform. They were interviewed by Roberts and a fellow academic, the University of Southern California’s Safiya Noble.

Bowden described the early days of Myspace’s popularity, when suddenly the company was overwhelmed with inappropriate images, or at least images it thought might be inappropriate. It was hard to say what should be on the platform because there were no actual rules. Bowden helped create those rules, and she held up to the crowd the notebook where those guidelines were stored.

“I went flipping through it yesterday and there was a question of whether dental-floss-sized bikini straps really make you not nude. Is it okay if it is dental-floss-size or spaghetti strap? What exactly made you not nude? And what if it’s clear? We were coming up with these things on the fly in the middle of the night,” Bowden said. “[We were arguing] ‘Well, her butt is really bigger, so she shouldn’t be wearing that. So should we delete her but not the girl with the little butt?’ These were the decisions. It did feel like we were making it up as we were going along.”

Bowden said that her team consisted of the odd conglomeration of people that were drawn to overnight work looking at weird and disturbing stuff. “I had a witch, a vampire, a white supremacist, and some regular day-to-day people. I had all these different categories,” Bowden, who is black, said. “We were saying, ‘Based on your experience in white-supremacist land, is this white-supremacist material?’”

That was in the mid-’00s. But as social media, relying on user-generated content, continued to explode, a variety of companies began to need professional content moderators. Roberts has traced the history of the development of moderation as a corporate practice. In particular, she’s looked at the way labor gets parceled out. There are very few full-time employees working out of corporate headquarters in Silicon Valley doing this kind of stuff. Instead, there are contractors, who may work at the company, but usually work at some sort of off-site facility. In general, most content moderation occurs several steps removed from the core business apparatus. That could be in Iowa or in India (though these days, mostly in the Philippines).

“The workers may be structurally removed from those firms, as well, via outsourcing companies who take on CCM contracts and then hire the workers under their auspices, in call-center (often called BPO, or business-process outsourcing) environments,” Roberts has written. “Such outsourcing firms may also recruit CCM workers using digital piecework sites such as Amazon Mechanical Turk or Upwork, in which the relationships between the social-media firms, the outsourcing company, and the CCM worker can be as ephemeral as one review.”

Each of these distancing steps pushes responsibility away from the technology company and into the minds of individual moderators.

LaPlante, for example, works on Mechanical Turk, which serves as a very flexible and cheap labor pool for various social-media companies. When she receives an assignment, she will have a list of rules that she must follow, but she may or may not know the company or how the data she is creating will be used.

Most pressingly, though, LaPlante drew attention to the economic conditions under which workers are laboring. They are paid by the review, and the prices can go as low as $0.02 per image reviewed, though there are jobs that pay better, like $0.15 per piece of content. Furthermore, companies can reject judgments that Turkers make, which means they are not paid for that time, and their overall rating on the platform declines.
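To make those piece rates concrete, here is a back-of-the-envelope sketch—my arithmetic, not data from LaPlante—of what they imply for hourly earnings. The review speeds are derived figures for illustration, and rejected work, which goes unpaid, would push the required pace even higher.

```python
# Rough arithmetic on piecework pay, using the per-item rates quoted above.
# The required review speeds are derived for illustration, not reported data.

FEDERAL_MIN_WAGE = 7.25  # USD per hour, the 2017 federal floor

for rate_per_item in (0.02, 0.15):
    items_per_hour = FEDERAL_MIN_WAGE / rate_per_item
    seconds_per_item = 3600 / items_per_hour
    print(
        f"At ${rate_per_item:.2f}/item: {items_per_hour:.0f} reviews/hour, "
        f"one every {seconds_per_item:.0f} seconds, just to reach minimum wage"
    )

# At $0.02/item: 363 reviews/hour, one every 10 seconds
# At $0.15/item: 48 reviews/hour, one every 75 seconds
```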

This work is a brutal and necessary part of the current internet economy. The people who do it are also providing valuable training data that companies use to build machine-learning systems. And yet they are lucky to make minimum wage, have no worker protections, and must work at breakneck speed to try to earn a living.

As you might expect, reviewing violent, sexual, and disturbing content for a living takes a serious psychological toll on the people who do it.

“When I left Myspace, I didn’t shake hands for like three years because I figured out that people were disgusting. And I just could not touch people,” Bowden said. “Most normal people in the world are just fucking weirdos. I was disgusted by humanity when I left there. So many of my peers, same thing. We all left with horrible views of humanity.”

When I asked her if she’d recovered any sense of faith in humanity, a decade on, Bowden said no. “But I’m able to pretend that I have faith in humanity. That will have to do,” she told me. “It’s okay. Once you accept the basic grossness of humans, it’s easier to remember to avoid touching anything.”

LaPlante emphasized, too, that it’s not like the people doing these content-moderation jobs can seek counseling for the disturbing things they’ve seen. They’re stuck dealing with the fallout themselves, or with some sort of support from their peers.

“If you’re being paid two cents an image, you don’t have $100 an hour to pay to a psychiatrist,” LaPlante said.

In a hopeful sign, some tech companies are beginning to pay more attention to these issues. Facebook, for example, sent a team to the content-moderation conference. Others, like Twitter and Snap, did not.

Facebook, too, has committed to hiring 10,000 more people dedicated to these issues, and its executives are clearly thinking about them. This week, Facebook Chief Security Officer Alex Stamos tweeted that “there are no magic solutions” to several “fundamental issues” in online speech. “Do you believe that gatekeepers should police the bounds of acceptable online discourse?” he asked. “If so, what bounds?”

This is true. But content moderators already all know that. They’ve been in the room trying to decide what’s decent and what’s dirty. These thousands of people have been acting as the police for the boundaries of “acceptable online discourse.” And as a rule, they have been unsupported, underpaid, and left to deal with the emotional trauma the work causes, while the companies they work for have become the most valuable in the world.

“The questions I have every time I read these statements from big tech companies about hiring people are: Who? And where? And under what conditions?” Roberts told me.

The Most 2017 Story of 2017
December 14th, 2017, 01:38 PM

Good tech gone bad! Nefarious nerds! Dubious online platforms! Predatory late capitalism! Imagine if every tech and business motif from the past 12 months gathered to celebrate an end-of-year reunion in a single story.

This is that story. It is the story of the Fingerlings and the Grinch bots.

We begin, as Christmas stories sometimes do, in a toy store. Every holiday season has its must-have gizmo, like Cabbage Patch Kids or Tickle Me Elmo. This year’s sensation is the Fingerling, a plastic five-inch-tall baby monkey. Engineered to cling to an outstretched finger with its plastic hands and feet, the toy giggles, burps, and farts in response to petting and shaking. Imagine a manic pygmy marmoset robot with minor gastrointestinal issues, and you get the picture.

Many years ago, in the days when malls ruled the world, adoring mothers and fathers fearing the wrath of a wanting child would storm into stores and shove each other across aisles to grab a toy like the Fingerling. These days, however, the battle royale over popular toys has shifted online, where the dangers are more exotic than a mother’s flying elbow.

The new holiday showdown pits humans against software. It’s not a fair fight. A fleet of bots—software programs that can automate activities like search, chat, and online ordering—has been dispatched by anonymous online scalpers to buy up the most popular children’s toys on the internet. These bots overwhelm retail sites with bulk orders from multiple IP addresses and autofill payment and address information faster than humanly possible. Hence, the apt nickname: Grinch bots.
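The pattern described above—many orders in a short window, arriving from different IP addresses but converging on the same payment or shipping details—is also what retailers look for when they try to fight back. Here is a minimal, hypothetical sketch of such a velocity check; the thresholds, field names, and Order type are invented for illustration, not drawn from any retailer’s actual system.

```python
# A toy velocity check of the kind a retailer might run against bulk-buying
# bots. All thresholds and field names here are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Order:
    ip: str            # bots rotate IPs, so this alone is a weak signal
    ship_address: str  # shared shipping details are harder to rotate
    timestamp: float   # seconds since epoch

MAX_ORDERS_PER_WINDOW = 3
WINDOW_SECONDS = 3600.0

def flag_suspicious(orders: list[Order]) -> set[str]:
    """Flag shipping addresses that accumulate orders faster than a person
    plausibly could, even when every order arrives from a different IP."""
    by_address = defaultdict(list)
    for order in orders:
        by_address[order.ship_address].append(order.timestamp)

    flagged = set()
    for address, times in by_address.items():
        times.sort()
        # Slide a window across the timestamps; more than
        # MAX_ORDERS_PER_WINDOW orders inside it suggests automation.
        for i in range(len(times) - MAX_ORDERS_PER_WINDOW):
            if times[i + MAX_ORDERS_PER_WINDOW] - times[i] < WINDOW_SECONDS:
                flagged.add(address)
                break
    return flagged
```

Keying on shipping details rather than IP addresses matters because, as noted above, the bots spread their orders across many IPs; the package still has to arrive somewhere.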

Fingerlings are currently sold out at the websites of Toys “R” Us, Walmart, and Target. Where did the toys go? To sites like Amazon and eBay, where the bots’ owners are listing the $15 playthings for $1,000, or more. (It’s not clear who these people are, but they evidently possess programming chops, yet no soul.) Cyber scalpers have used the same methods to deplete online retailers of other toys, like Barbie Hello Dreamhouse and L.O.L. Surprise! Doll, which they can resell at exorbitant prices. While offline toy scalping and online ticket scalping are old practices, this seems to be the first case of mass-scale online toy scalping.

Retailers have failed to block the bots, and platforms have refused to stop the sellers. For example, eBay has claimed that there’s little it can do to halt the legal exchange of toys. “As an open marketplace, eBay is a global indicator of trends in which supply and demand dictate the pricing of items,” the company said in a statement. “As long as the item is legal to sell and complies with our policies, it can be sold on eBay.” The Grinch bots are not technically stealing or defrauding. They are practicing a form of legally sanctioned ransom.

The yuletide fleecing of middle-class parents has attracted political attention. “Grinch bots cannot be allowed to steal Christmas, or dollars, from the wallets of New Yorkers,” Senator Chuck Schumer of New York said. He has proposed legislation that bans bots on retail sites, expanding a law that already prohibits the use of bots to bulk-buy tickets for concerts or theater. That law’s name is the Better Online Ticket Sales Act—or the BOTS Act.

But even if the threat of fines scares off scalpers, the law won’t pass in time for this holiday season. As Grinch bots reap and hoard playthings, ’twill be too late for Fingerlings.

* * *

Why is this story so fitting for 2017? The Grinch bot drama mashes together two moral panics about once-celebrated tech stories—platforms and automation—and sprinkles them with dread about predatory capitalism. Beyond the nimbus of presidential scandal and the watershed revelations of sexual harassment, these fears have dominated the tech and business news cycles this year.

1. The Dark Side of Platforms

A platform is a digital interface that offers consumers access to a wide range of products, which the platform itself doesn’t necessarily own. Think Netflix for video, or Google for information. In a widely shared 2015 essay, Tom Goodwin, a writer and marketing strategist, summarized the spectacle of platform tech this way:

Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.

He was right: Something interesting is happening. But while Goodwin’s summary inspired sunny optimism back in 2015, the last 12 months have revealed the dark side of platforms, which often serve as clearinghouses of human indecency. Propaganda has thrived on Twitter, Google search results have elevated false breaking-news stories, and Uber devised a controversial program called “Greyball” to maneuver cars away from regulators trying to bust illegal ride-hailing. Most dramatically, a former executive at Facebook now claims the company is “ripping apart” society.

These scandals have not always damaged these companies’ utility or profit; while Uber’s valuation has declined, Facebook’s and Google’s stocks have grown dramatically. But they have pierced the prevailing techno-optimism by calling attention, again and again, to the same question: How can users trust platforms that are often no better than the worst of their users? That query has special resonance for families who are victims of today’s cyber scalpers. These high-tech scoundrels have scammed online retailers and turned the laissez-faire rules of eBay’s platform against the interests of its shoppers. Like the Russian propagandists on Facebook and Twitter, the cyber scalpers succeeded not by flouting their platforms’ rules, but by mastering them.

2. The Dark Side of Automation

Bots and artificial intelligence have been hailed as the next great technological breakthrough. They populate a vision of a future where corporate bots replace customer-service agents and where personal AI assistants help people shop or manage household tasks, like Her, or, less creepily, Jarvis. In this future, bots serve as automators of tedium and toil, allowing companies and individuals to focus on what really matters to them.

But in the last 12 months, bots have been mastered by trolls and scam artists. They have automated the worst elements of human nature—the instinct to deceive, ridicule, and extort. Immediately after the first presidential debate last year, more than a third of pro-Trump tweets (and about a fifth of pro-Clinton tweets) came from bots. Facebook and Twitter were flooded with bots that, in mimicking the most obnoxious users, merely amplified the sites’ worst tendencies. These “bot-makers see an opportunity to exploit anonymity with a humanlike touch at an inhuman scale,” John Herrman wrote for The New York Times.

It is a perfect description of the Grinch bot programmers, as well. Scalping is an ancient practice. But cyber scalping allows these scammers to operate at an inhumanly vast scale and with inhuman speed, so that they can absorb the entire supply of popular toys at Walmart’s and Target’s websites.

3. The Predation of “Late Capitalism”

Merriam-Webster’s word of the year is feminism—a worthy selection. But in economic circles, perhaps no term has been more emblematic of 2017 than the ubiquitous yet amorphous “late capitalism.”

The concept sounds vaguely Marxist. But it wasn’t coined by Karl Marx himself, according to William Clare Roberts, a political scientist at McGill University interviewed by The Atlantic’s Annie Lowrey. Rather, he said, the term came from Marxist acolytes alluding to the darkness that comes just before the dawn of socialism, the moment when “we see the ligaments of the international system that socialists will be able to seize and use.”

It’s hard to imagine a better advertisement for switching economic systems than anonymous scalpers ripping off well-intentioned parents in the name of free markets. But that’s essentially the attitude of the Grinch-bot coders and their ilk. Last year, two brothers bought a stockpile of Hatchimals—the it-toy of 2016—to force families to pay large markups to get the toy. It was like an analog version of the Grinch-bot scandal. Interviewed by Time magazine, the brothers were remorseless; in fact, they were proud. “We didn’t break any laws,” one brother, Mike Zappa, said. “And we aren’t dictating how the market is pricing the toys on eBay. What we are doing is capitalism at its best.”

It’s a shameless defense. But it’s not so different from the argument lurking in eBay’s corporate statement, which implies Grinch bots aren’t a scandal, because their behavior is technically legal. Indeed, that makes a fine summary for the worst storylines of the year, from politics to tech to business. Sometimes, the scandal is what’s permissible.

The Environmental Cost of Internet Porn
December 13th, 2017, 01:38 PM

Online streaming is a win for the environment. Streaming music eliminates all that physical material—CDs, jewel cases, cellophane, shipping boxes, fuel—and can reduce carbon-dioxide emissions by 40 percent or more. Video streaming is still being studied, but the carbon footprint should similarly be much lower than that of DVDs.

Scientists who analyze the environmental impact of the internet tout the benefits of this “dematerialization,” observing that energy use and carbon-dioxide emissions will drop as more media is delivered over the internet. But this theory might have a major exception: porn.

Since the turn of the century, the pornography industry has experienced two intense hikes in popularity. In the early 2000s, broadband enabled higher download speeds. Then, in 2008, the advent of so-called tube sites allowed users to watch clips for free, like people watch videos on YouTube. Adam Grayson, the chief financial officer of the adult company Evil Angel, calls the latter hike “the great mushroom-cloud porn explosion of 2008.”

Precise numbers don’t exist, but the impression across the industry is that viewership is way, way up. Pornhub, the world’s most popular porn site, provides some of the only accessible data in its yearly web-traffic reports. The first “Year in Review” post, in 2013, tabulated that people visited the site 14.7 billion times. By 2016, that number had grown to 23 billion, and those visitors watched more than 4.59 billion hours of porn. And Pornhub is just one site.

Is pornography in the digital era leaving a larger carbon footprint than it did during the days of magazines and videos? Obtaining raw numbers will always be a sticking point, because the stigmatized industry has never kept track of sales like the music and film industries, and has no significant archives. But if pornography experts’ estimates are accurate, they suggest a rare scenario in which digitization might have increased the overall consumption of porn so much that the principle of dematerialization gets flipped on its head. The internet could allow people to spend so much time looking at porn that it’s actually worse for the environment.

* * *

Using a formula that Netflix published on its blog in 2015, Nathan Ensmenger, a professor at Indiana University who is writing a book about the environmental history of the computer, calculates that if Pornhub streams video as efficiently as Netflix (0.0013 kWh per streaming hour), it used 5.967 million kWh in 2016. For comparison, that’s about the same amount of energy 11,000 light bulbs would use if left on for a year. And operating with Netflix’s efficiency would be a best-case scenario for the porn site, Ensmenger believes.
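Ensmenger’s estimate is simple enough to reproduce. The sketch below just multiplies the two figures cited above; the 60-watt bulb used in the comparison is my assumption, since the light-bulb figure above doesn’t specify a wattage.

```python
# Back-of-the-envelope reproduction of Ensmenger's estimate, using the
# figures cited above.

KWH_PER_STREAMING_HOUR = 0.0013  # Netflix's published efficiency, kWh/hour
HOURS_STREAMED_2016 = 4.59e9     # hours watched on Pornhub in 2016

total_kwh = KWH_PER_STREAMING_HOUR * HOURS_STREAMED_2016
print(f"Estimated energy use: {total_kwh / 1e6:.3f} million kWh")  # 5.967

# Sanity check on the light-bulb comparison, assuming a 60 W incandescent
# bulb (an assumption; no wattage is specified above):
bulb_kwh_per_year = 0.060 * 24 * 365          # ~526 kWh per bulb per year
print(f"Equivalent bulbs: {total_kwh / bulb_kwh_per_year:,.0f}")  # ~11,350
```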

Grayson says he has witnessed this explosion of growth firsthand at Evil Angel. He estimates that the site’s viewership has increased by 7,000 percent since the time of DVDs. In the late 1990s, he says, a new Evil Angel DVD would sell approximately 7,500 copies in the first 30 days. Now, he says, Evil Angel videos are streamed 30,000 times in the first 30 days—and that represents only the 5 percent of its web traffic that comes from paying customers. Each week, 2 million free previews are watched. “There’s no way, 15 years ago, at the peak of physical media, that many people were touching our brand,” he says.

Still, it’s impossible to access any data for the porn industry as a whole. Trade magazines like Variety or Billboard don’t exist, and sales records have never been archived. For Jon Koomey, a data scientist who studies the environmental impact of the internet, this lack of information hamstrings any serious inquiry. Although the estimates sound reasonable to him, and he believes pornography very well could provide an exception to the rule of dematerialization, he warns against speculative comparisons. “I don’t even know what fraction of the internet is porn,” he says. “And without data, it’s hard to say anything sensible.”

Koomey warns that there are simply too many variables to be considered. For instance, the growth of porn consumption since the turn of the century would have to be compared to the growth of all internet data during the same time period. The energy and emissions for manufacturing, marketing, transporting, and using porn DVDs would have to be compared to the electricity required to make a search-engine query, the electricity used by the device making the search, and the operational cost of the website’s server, network, and specific data center.

Gail Dines, a sociologist who studies pornography, agrees that precise numbers would be impossible to find. But as an anti-pornography advocate, she views the potential environmental costs of such rabid online consumption as an important critique against the industry. She is sure that online pornography is much more popular, and attributes this growth to what she calls the principle of the “three As”: affordability, accessibility, and anonymity. “The more anonymous you make porn, the more affordable, the more accessible, the more you drive demand,” she says.

In her view, each new technology heightens the three As. Mobile phones, which can be viewed anywhere, are more private than desktop computers, DVDs, and VHSs, which must be viewed in a home. Those, in turn, are more private than an adult theater. Consumption has also become more anonymous as tube sites like Pornhub require no log-in or credit-card information. There is no fear of being seen by a neighbor at a sex shop.

* * *

All the researchers I spoke to would love to have access to reliable data. The sociologist Chauntelle Tibbals believes in the educational benefits of pornography, but she has qualms about the industry’s exploitative practices, and therefore has misgivings about using numbers provided by Pornhub. She notes that Pornhub is part of a vast porn empire called MindGeek, which quietly controls almost all of the free tube sites and an increasing number of production companies. Tibbals believes Pornhub releases these numbers—and engages in promotional activities like a recent offer of free snow removal in Boston—as an attempt to normalize itself and to shift the focus away from rampant piracy issues and accusations of promoting sexual violence against women. (Pornhub did not respond to a request for an interview.)

Although their numbers could be accurate, Tibbals believes trusting them without access to company records would be naïve, akin to trusting numbers published in brochures by companies like Goldman Sachs or Exxon. For that reason, she says a huge asterisk must be placed beside them in any serious effort to comprehend their impact. It’s possible Pornhub’s data is not reflective of the adult industry, but only of adult piracy sites.

Ensmenger, the Indiana University historian, agrees that the numbers are nebulous at best. But like Dines, he still thinks these questions are worth asking, even if only to raise awareness that internet porn does take an environmental toll. While Pornhub may be using an enormous amount of electricity, “none of us are paying that electrical bill in any way that impacts our behavior,” he says.

For Ensmenger, this epitomizes the problem with the digital economy, where so many of the costs are outsourced or hidden that consumers believe everything is free. Most sites offer their free videos by selling advertising to companies that track consumer behavior, and that tracking itself consumes a considerable amount of energy. More importantly, consumers don’t have to think about the significant environmental costs of building and disposing of electronic products, such as screens, servers, and hard drives.

A notion like dematerialization, Ensmenger says, can often be a myth that Silicon Valley tells itself, without acknowledging that the region contains the country’s highest number of EPA Superfund sites, where the federal government must clean up hazardous pollutants and contaminants.

Even if consumers don’t pay the electric bill, somebody must. “With digital things, it’s just so easy to externalize the costs to other places, other actors, that we make assumptions about them being less environmentally impactful that are just not justified,” Ensmenger says.

Will Ukraine Be Hit by Yet Another Holiday Power-Grid Hack?
December 13th, 2017, 01:38 PM

The holiday season has not been a joyful time with respect to Ukraine’s power grid. Days before Christmas in 2015, remote hackers wrested control from Ukrainian grid operators and, by digitally commandeering substations, shut off power for 225,000 customers for several hours. Then, in mid-December of last year, hackers deployed malicious code that, without any real-time human support, disrupted a Kiev transmission station and caused a substantial blackout that lasted roughly an hour in the capital—the first fully automated grid attack ever seen.

With the holidays approaching again, the eyes of security experts and diplomats are on the energy companies in Ukraine and on the teams, believed to be based in Russia, that are responsible for the attacks. Researchers have linked these groups to the infiltration of energy companies in the United States and Europe. Experts are watching this month with concerns over safety in Ukraine and over the significant implications such an attack would have worldwide, including in the U.S.

Some evidence has already suggested that a new attack could be in the works. Robert Lee, the CEO and founder of the industrial-cybersecurity firm Dragos and a leader in analyzing both of the Ukraine grid attacks, says that in recent weeks he has observed an unusual spike in activity in Ukraine by the same group of developers who engineered the malware used in the 2016 attack. From last year’s attack until mid-November, Dragos had registered very little activity in Ukraine by the group, Lee says. “In our assessment, it would be completely reasonable to execute an attack this month,” he warns.

It’s possible that this spike in activity could be reconnaissance, preparation for a later operation, or simply an intention to create fear of a forthcoming hack. Michael Assante is the director of industrials and infrastructure at the cybersecurity-focused SANS Institute and a lead investigator of the 2015 attack. He says that, given the continuous and sustained access campaigns in Ukraine—which have occurred against the backdrop of the clash in Eastern Ukraine that resulted from Russia’s annexation of Crimea, in 2014—it is unclear if an attack is being readied. “The attackers could launch an attack if they believed an attack served a purpose and felt that the risk of being foiled was low enough to proceed,” he says.

Now American officials are on the lookout for any features of a 2017 attack in Ukraine that could spell trouble if a nation-state were to focus its efforts on the high-risk target of the United States—perhaps in case of a war, when the norm against attacking infrastructure slackens.

Indeed, past attacks on Ukraine have informed officials’ understanding of the national-security threats to the U.S. For more than a decade leading up to the 2015 Ukraine attack, officials and diplomats discussed the possibility of an attack on infrastructure, according to Chris Painter, who led the State Department’s international cyberpolicy and diplomacy efforts from 2011 until this fall. “This is not a new thing on our radar, but we’ve actually seen it coming of age and happening, which has raised the alarm bells,” he says, characterizing such an attack on the United States as a low-probability but high-impact event. “We are in a new era where we will see more of these. It has gone from theoretical to more doable and practical.”

As Herb Lin, a cybersecurity scholar at Stanford University, points out, an attack in the United States of limited duration and scope, such as the 2015 and 2016 Ukrainian grid attacks, would be “annoying but tolerable,” akin to a typical, localized blackout. But watching the Ukrainian grid is of particular interest in the U.S. because past attacks may well have been for purposes of signaling, according to Chris Inglis, who served as the deputy director of the National Security Agency from 2006 to 2014. These attacks were “done visibly and in a venue where the United States couldn’t react,” he says.

Indeed, Lee observed that a number of the capabilities that the developers behind the 2016 attack had engineered into malware were not ultimately deployed in the attack. “It looked more like a proof of concept or a test run than a final outcome,” he says. It was as if this grid attack on a non-NATO country was meant to show off capabilities that would frighten or deter other powers—which a defining analysis by the journalist Andy Greenberg in Wired suggests is an element of the campaign of cyberassaults on Ukraine.

A cyberattack on the U.S. grid would almost certainly require the backing and resources of a nation-state. Researchers have connected the hackers responsible to the Russian government, though Russia has denied allegations of hacking in Ukraine. And Lee has observed that the attackers function as a complex organization with multiple teams and specialties, like a company or an intelligence agency—with the 2015 attackers working as an operations team and the 2016 attackers as a development team. Russia has proved its willingness to use cybertools to meddle in the United States this year. Further, U.S. government officials expect more sophisticated and widespread cyberoperations from Russia, especially around the 2018 midterm elections.

“What worries me most about Russia is not its technology, but its audacity and their willingness to cross the line,” Inglis says. “They have proved themselves willing to do things that cross every definition of red line.”

Still, the capabilities deployed against Ukraine only mean so much for the United States. The U.S. power grid belongs to a diverse set of mostly private-sector owners, and much of it is heavily regulated. It would be more difficult to attack a grid of this complexity. At the same time, the U.S. grid is more digitally dependent. Where Ukraine was able to restore power within hours by reverting to analog operations, a heavy reliance on automation in the United States limits this recovery option. “I’d be concerned if, on the receiving side, we make the mistake of digitizing too much,” Inglis says. “The benefit of a manual backup showed itself [in Ukraine] as a feature as opposed to a piece of legacy. Right now, in the United States, there are some places with manual capabilities and others where there aren’t.”

Experts agree that power companies are making strides toward increasing the defensibility and readiness of the U.S. grid, but there is a ways to go. “We have certainly learned that current defenses should not be considered adequate when facing attackers who are experienced and equipped to target power systems,” wrote Assante, who has also worked in the leadership of American Electric Power and the North American Electric Reliability Corporation.

“We have to step up our game,” Painter says. “Clearly there are malicious actors that want to mess with these systems, and I can’t say that we’ve done enough or that industry has done enough.”

Yet a well-financed, imaginative adversary with the backing of a nation-state could seemingly come up with any number of attacks on American systems (just as the United States can). For example, one high-value target in the United States would be large transformers, which enable the bulk transmission of electricity. “They weigh hundreds of tons, cost millions of dollars, take months to build,” Lin says. A cyberattack on such transformers could result in power losses lasting for weeks or months if backup transformers were not in place—and they often aren’t. (Indeed, transformers are subject to threats outside the digital realm, and were the target of a California sniper attack in 2013.)

While the technical defense of each component of a power grid presents numerous challenges, defending a grid does not always come down to patching vulnerabilities. In the 2015 Ukraine attack, for instance, hackers did not engineer technically sophisticated tools. Instead, they used phishing emails and learned insider knowledge, executing legitimate operations but doing so to inflict damage. “This is less a technical issue—though there are serious technical challenges to be solved—than a people issue about cognizance, responsibility, and accountability,” Inglis says.

As they look to Ukraine this month, experts say it would be particularly concerning to see an attack affecting a larger area, spreading on autopilot, or lasting for more than a day. Of course, any potential second-order effects, such as loss of life, would raise the stakes—as would a domino effect in which a power outage also disrupted telecommunication or air-traffic systems.

And in the United States, officials are learning to live with uncertainty about the grid. “It is a fact of life that we could lose power for a couple of hours due to a foreign power,” Lee says. “We don’t have to panic about it, but we do need to come to terms with this reality while working to make it harder to achieve.”

A New Kind of Soft Battery, Inspired by the Electric Eel
December 13th, 2017, 01:38 PM

In 1799, the Italian scientist Alessandro Volta fashioned an arm-long stack of zinc and copper discs, separated by salt-soaked cardboard. This “voltaic pile” was the world’s first synthetic battery, but Volta based its design on something far older—the body of the electric eel.

This infamous fish makes its own electricity using an electric organ that makes up 80 percent of its two-meter length. The organ contains thousands of specialized muscle cells called electrocytes. Each only produces a small voltage, but together, they can generate up to 600 volts—enough to stun a human, or even a horse. They also provided Volta with ideas for his battery, turning him into a 19th-century celebrity.

Two centuries on, and batteries are everyday objects. But even now, the electric eel isn’t done inspiring scientists. A team of researchers led by Michael Mayer at the University of Fribourg has now created a new kind of power source that ingeniously mimics the eel’s electric organ. It consists of blobs of multicolored gels, arranged in long rows much like the eel’s electrocytes. To turn this battery on, all you need to do is press the gels together.

Unlike conventional batteries, the team’s design is soft and flexible, and might be useful for powering the next generation of soft-bodied robots. And since it can be made from materials that are compatible with our bodies, it could potentially drive the next generation of pacemakers, prosthetics, and medical implants. Imagine contact lenses that generate electric power, or pacemakers that run on the fluids and salts within our own bodies—all inspired by a shocking fish.

To create their unorthodox battery, the team members Tom Schroeder and Anirvan Guha began by reading up on how the eel’s electrocytes work. These cells are stacked in long rows with fluid-filled spaces between them. Picture a very tall tower of syrup-smothered pancakes, turned on its side, and you’ll get the idea.

When the eel’s at rest, each electrocyte pumps positively charged ions out of both its front-facing and back-facing sides. This creates two opposing voltages that cancel each other out. But at the eel’s command, the back side of each electrocyte flips, and starts pumping positive ions in the opposite direction, creating a small voltage across the entire cell. And crucially, every electrocyte performs this flip at the same time, so their tiny voltages add up to something far more powerful. It’s as if the eel has thousands of small batteries in its tail; half are pointing in the wrong direction but it can flip them at a whim, so that all of them align. “It’s insanely specialized,” says Schroeder.
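The arithmetic of that stacking is worth seeing. The 150-millivolt-per-cell figure below is a textbook approximation assumed for illustration; the point is how series addition turns tiny cell voltages into a 600-volt jolt.

```python
# Series stacking in miniature: each electrocyte contributes a small voltage,
# and simultaneous firing adds them, like cells stacked in a battery.
# The per-cell figure is an approximate, assumed value.

VOLTS_PER_ELECTROCYTE = 0.15  # roughly 150 mV per cell when firing
TARGET_VOLTS = 600.0          # the eel's reported maximum output

cells_needed = TARGET_VOLTS / VOLTS_PER_ELECTROCYTE
print(f"Electrocytes needed in series: {cells_needed:,.0f}")  # ~4,000
```

Four thousand cells is consistent with the “thousands of specialized muscle cells” described above.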

How an electric eel’s electrocytes work (Schroeder et al. / Nature)

He and his colleagues first thought about re-creating the entire electric organ in a lab, but soon realized that it’s far too complicated. Next, they considered setting up a massive series of membranes to mimic the stacks of electrocytes—but these are delicate materials that are hard to engineer in the thousands. If one broke, the whole series would shut down. “You’d run into the string-of-Christmas-lights problem,” says Schroeder.

In the end, he and Guha opted for a much simpler setup, involving lumps of gel that are arranged on two separate sheets. Look at the image below, and focus on the bottom sheet. The red gels contain saltwater, while blue ones contain freshwater. Ions would flow from the former to the latter, but they can’t because the gels are separated. That changes when the green and yellow gels on the other sheet bridge the gaps between the blue and red ones, providing channels through which ions can travel.

Here’s the clever bit: The green gel lumps only allow positive ions to flow through them, while the yellow ones only let negative ions pass. This means (as the inset in the image shows) that positive ions flow into the blue gels from only one side, while negative ions flow in from the other. This creates a voltage across the blue gel, exactly as if it were an electrocyte. And just as in the electrocytes, each gel only produces a tiny voltage, but thousands of them, arranged in a row, can produce up to 110 volts.
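The physics behind each junction’s small voltage is the Nernst relation, which converts a salt-concentration gradient into an electrical potential. The sketch below uses assumed concentrations—the published recipe may differ—and an ideal, lossless series stack, so the final cell count is an order-of-magnitude illustration, not the paper’s figure.

```python
# Nernst potential across one ion-selective junction between salty and
# fresh gel: E = (R*T / (z*F)) * ln(c_high / c_low). The concentrations
# below are assumptions for illustration, not the paper's actual recipe.
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # room temperature, K
F = 96485.0  # Faraday constant, C/mol
z = 1        # valence of Na+ (or Cl-)

c_high, c_low = 0.5, 0.005  # assumed salt concentrations, mol/L

e_junction = (R * T) / (z * F) * math.log(c_high / c_low)
print(f"Per-junction potential: {e_junction * 1000:.0f} mV")  # ~118 mV

# Each repeating unit (salty gel, cation-selective gel, fresh gel,
# anion-selective gel) contains two such junctions working in the same
# direction, so in the ideal, lossless case:
units_needed = 110 / (2 * e_junction)
print(f"Units needed for 110 V: {units_needed:,.0f}")  # ~465
```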

Schroeder et al. / Nature

The eel’s electrocytes fire when they receive a signal from the animal’s neurons. But in Schroeder’s gels, the trigger is far simpler—all he needs to do is to press the gels together.

It would be cumbersome to have incredibly large sheets of these gels. But Max Shtein, an engineer at the University of Michigan, suggested a clever solution—origami. Using a special folding pattern that’s also used to pack solar panels into satellites, he devised a way of folding a flat sheet of gels so the right colors come into contact in the right order. That allowed the team to generate the same amount of power in a much smaller space—in something like a contact lens, which might one day be realistically worn.

For now, such batteries would have to be actively recharged. Once activated, they produce power for up to a few hours, until the levels of ions equalize across the various gels, and the battery goes flat. You then need to apply a current to reset the gels back to alternating rows of high-salt and low-salt. But Schroeder notes that our bodies constantly replenish reservoirs of fluid with varying levels of ions. He imagines that it might one day be possible to harness these reservoirs to create batteries.

Essentially, that would turn humans into something closer to an electric eel. It’s unlikely that we’d ever be able to stun people, but we could conceivably use the ion gradients in our own bodies to power small implants. Of course, Schroeder says, that’s still more a flight of fancy than a goal he has an actual road map for. “Plenty of things don’t work for all sorts of reasons, so I don’t want to get too far ahead of myself,” he says.

It’s not unreasonable to speculate, though, says Ken Catania from Vanderbilt University, who has spent years studying the biology of the eels. “Volta’s battery was not exactly something you could fit in a cellphone, but over time we have all come to depend on it,” he says. “Maybe history will repeat itself.”

“I’m amazed at how much electric eels have contributed to science,” he adds. “It’s a good lesson in the value of basic science.” Schroeder, meanwhile, has only ever seen electric eels in zoos, and he’d like to encounter one in person. “I’ve never been shocked by one, but I feel like I should at some point,” he says.

A Birth Certificate Is a Person’s First Possession
December 11th, 2017, 01:38 PM

A recent controversy over birth certificates in Arkansas demonstrates that these slips of paper are imbued with political and social meaning. In 2015, a married couple, Marisa and Terrah Pavan, had their first child, who was conceived through sperm donation. The Arkansas Department of Health, or ADH, listed only Terrah, who gave birth to their daughter, on the baby’s birth certificate. This was contrary to state law, under which the spouse of the birth mother also is automatically listed.

The case went to the Supreme Court, which ruled that same-sex couples must receive the same legal treatment as different-sex ones. When an Arkansas circuit-court judge later ruled that the ADH must comply, it triggered a brief crisis on Friday: until the state ended the practice, now deemed discriminatory, no newborns could be issued birth certificates at all. The governor ordered the ADH to meet the Supreme Court’s standard, and after a few hours the agency relented. Both Pavans can now finally be named on their child’s birth certificate.

This may have been a blink-and-you-might-miss-it news story for many outside Arkansas. But it’s only the latest example in a long history of using birth certificates to make governmental and social statements about identity and relationships.

* * *

Birth registration has long been useful to governments, allowing them to tax, conscript, and count the population. That effort was traditionally the purview of churches, where the practice dates to the 16th century. It wasn’t until 1837, in England and Wales, that birth registration began to become standardized and subject to governmental control.* Several decades later, it was mandatory to register all births in the two countries.

Today, most countries require birth certificates to be issued within a certain period after birth. The UN Convention on the Rights of the Child also enshrines the right for all children to have their births registered.

For that reason, the birth certificate becomes the first object most people own. Bound up in official identity and personal relationships, its stakes are high. Doubting the accuracy or provenance of a birth certificate can send shock waves rippling outward for years or even decades. The “birther” conspiracy during Barack Obama’s first presidential candidacy is a notable example: Donald Trump and others claimed that Obama hadn’t been born in the United States and thus was ineligible for the presidency. The birth certificate is a battleground for debates about parentage, gender, identity, and governmental responsibility.

For those living in countries where birth certificates are rarely seen, it might seem like the document is a relic of an earlier time more obsessed with filial legitimacy. But birth certificates are still essential to basic citizenship rights across the world. For one thing, they often serve as a stepping stone to other identification documents, such as social-security cards and passports.

Elsewhere, the document has more diverse purposes. In Sudan, a birth certificate has to be presented before a child can enroll in school. In the United States, birth certificates are required to register with Native American tribes. In Sri Lanka, a birth certificate makes it possible to respect a minimum age of criminal responsibility; without it, police may informally estimate (and possibly exaggerate) children’s ages. In the United Kingdom, birth certificates help establish eligibility to join the armed forces.

But it’s not always easy to find the document. That situation is especially acute for displaced people. Birth certificates are important for family reunification. Children born in refugee camps face the prospect of becoming stateless if they can’t prove their parentage, and thus where they can claim nationality. This is tricky for Syrians in Turkey, for instance, as Syrian citizenship passes down through the father, and details of the father aren’t always known.

Worldwide, almost a third of all births aren’t registered. Nationally, registration rates vary widely, from 3 percent in Somalia to 100 percent in Bhutan. In some cases, that’s because birth certificates don’t seem useful to parents. For someone who lives in a remote part of a country that doesn’t provide any obvious citizenship benefits, there’s not much of an incentive to bother with registering a child.

There are also practical barriers. Many countries lack the technology or capacity to register each birth, even if doing so is mandated by law. And some countries only register babies born to married parents. Even parents who want to register might not be able to if they cannot afford to travel to a location where births are registered, or to cover the cost of issuing the certificate itself. There are also concerns that governments will eventually misuse registration records, whether for prejudicial policy, compulsory military service, or even ethnic cleansing. In the Soviet Union, “Jewish” was one of the 69 nationality options on birth certificates. Designating Soviet citizens as Jewish enabled discrimination against them, such as by limiting which colleges they could attend.

Some logistical hurdles have been tackled by devoting more funding and resources to vital-registration systems, or by making registration more convenient for parents. For instance, Tanzanian parents can register their children via text message, and mobile clinics in Indonesian villages bring birth certificates to new parents. The governance and trust issues, however, are more challenging to address.

* * *

Sometimes the state sees, and documents, its citizens differently than how they see themselves. Gender identity is one such area. In some places, transgender advocates have come under fire for proposals to make gender markers and names optional or amendable on birth certificates.

Overall, legal changes relating to birth certificates show how quickly the law is catching up to social attitudes about sex and gender. In the early 2000s, intersex individuals in Australia sought and received replacement birth certificates that left gender unspecified. But these documents were issued on an ad-hoc, retroactive basis. Later laws, such as one passed in Germany in 2013, officially allowed parents to leave the gender box on birth certificates blank.

In the Philippines, the gender on a birth certificate can be changed in the case of “a clerical or typographical error.” A hard-won legal precedent for changing the gender designation for identity reasons also exists. In a 2008 ruling of the Supreme Court of the Philippines, Jennifer Cagandahan obtained the right to change the name and gender on her birth certificate.

But these examples are outliers. In some places, gender markers on birth certificates can only be changed following gender-reassignment surgery. In many others, even that option is unavailable. This creates a situation in which an adult whose gender identity isn’t reflected on their birth certificate may be stuck with their birth-certificate gender on all the other official identity documents derived from it, such as ID cards and passports.

Other types of identity point out how arbitrary certain labels and designations are. One of these is race, which is marked on some birth certificates. Vivian Morris was born in 1969 in Montgomery, Alabama—the town where Rosa Parks famously refused to move to the back of the bus. On Morris’s birth certificate, her Korean mother was listed as white. At the time, there weren’t standard racial categories to choose from, and registration officials had more say. Morris tells me, “I always assumed that she was lumped in with white in Alabama, versus black, because those were the only recognized races back then in the Deep South.” Her experience exposes the gaps between bureaucratic permissibility and the complexity of racial identity. Those gaps haven’t fully closed since Morris’s childhood either: Some U.S. states still don’t allow multiracial children to be marked as such on their birth certificates.

* * *

Also contested are attempts to reflect changing notions of family headship on birth certificates. Japan’s koseki system, which oversees birth, death, and marriage registration, requires all members of a family to bear the same surname. In practice, this system prevents women from retaining their own last names. Around the world, some people whose last names are different from their children’s travel with birth certificates to prove their relationship.

Birth certificates also demonstrate the prevalence of female-headed households. In Jamaica, the father’s name is not listed on a third of birth certificates. In Kenya, until fairly recently, it was customary to write “XXXX” in place of the father’s name for children born to unmarried parents. The term “legitimate” was finally removed from American birth certificates in 1979.

Sometimes birth certificates bear witness to confused conceptions of parentage and legitimacy. England and Wales, for instance, have a dizzying set of rules about when the surname can be changed on the birth certificate, related to parental marriage status, who was present at registration, and what surname the child takes. However, in recognition of same-sex relationships, a U.K. birth certificate can list two mothers and no father. A birth certificate in Argentina can now list two mothers and a father.

These documents also show the extent of progress when it comes to gender relations. Just a few decades ago, U.S. birth certificates listed the father’s occupation but not the mother’s. There was no expectation of working women, at least officially.

* * *

Gender, race, and even date of birth aren’t the only areas where identity, as officially stated on a birth certificate, has been shown to be mutable. Following adoption, birth parents’ names on U.S. birth certificates may be replaced with adoptive parents’ names.

This happened to Rachel Zients Schinderman, whose father died when she was four. As an adult she was adopted by her stepfather, which triggered the reissue of her birth certificate to replace her father’s name with her stepfather’s. This was an emotional experience for Schinderman. “No one could take my real father away from me, and someone else wanted to be there for me too,” she tells me. Even so, the result strikes her as uncanny. “It is very strange to see [my stepfather’s] name there and the age he would have been at the time of my birth.” Schinderman understands why birth certificates get reissued upon adoption, but feels alienated by the bureaucratic requirement for such a change. “I just wish I had the option,” she says.

Schinderman isn’t alone in wanting this choice. There are fierce debates in adoption and genealogy circles over the sealing of original birth certificates when amended birth certificates are generated. Some argue that adoptees deserve to have access to their original documents, and that these should remain immutable records of biological parentage. Others point to the need for privacy for birth parents and respect for adoptive parents.

The legal skirmishes over who should be able to see a birth certificate, and what information it should contain, seem likely to intensify rather than diminish. As technology improves and legal frameworks for parenting continue to evolve, new controversies are bound to play out over birth certificates new and old. Will sperm donors, egg donors, surrogates, and others be reflected? Will these documents allow for more than three people to be named as parents? Will increasingly sophisticated biometrics be embedded into them?

Whatever the future holds for birth certificates, it’s clear that they’ll continue to matter not just for administrative purposes, but for emotional reasons, too. As Schinderman puts it, “even though the birth certificate is just a piece of paper, it is my piece of paper.”


This post appears courtesy of Object Lessons.

* This article previously misstated the year that birth registration was standardized in England and Wales. We regret the error.

A Viral Short Story for the #MeToo Moment
December 11th, 2017, 01:38 PM

Recent months make it seem like humanity has lost the instruction manual for its “procreate” function and has had to relearn it all from scratch. After scores of prominent men have been fired over sexual-assault allegations, confusion reigns about signals, how to read them, and how not to read into them. Some men are wondering if hugging women is still okay. Some male managers are inviting third parties into performance reviews in order to avoid being alone with women. One San Francisco design-firm director recently said holiday parties should be canceled, as The New York Times reported, “until it has been figured out how men and women should interact.”

Into this steps “Cat Person,” a New Yorker fiction story by Kristen Roupenian that explores how badly people can misread each other, but also how frightening and difficult sexual encounters can be for women, in particular. “It isn’t a story about rape or sexual harassment, but about the fine lines that get drawn in human interaction,” Deborah Treisman, The New Yorker’s fiction editor, told me.

This weekend, the story went unexpectedly viral. Or, perhaps, in this #MeToo moment, it went expectedly viral, by revealing the lengths women go to in order to manage men’s feelings, and the shaming they often suffer nonetheless. A New Yorker spokeswoman said via email that of all the fiction the magazine published this year, “Cat Person” was the most read online, and it’s also one of the most-read pieces overall in 2017.

Treisman said that while she was not looking for a story that touched on topical issues of sexual agency specifically, when this piece came in, she did hope to get it into the magazine “sooner rather than later.”

The piece—which you can read here if you haven’t already and save yourself both spoilers and holiday-party alienation—follows a 20-year-old college student named Margot as she goes on a date with an older man, Robert, then breaks things off with him. And while it’s fiction, for many women, it felt a little too real.

In the piece, Margot comes off as polite, a little narcissistic, and more than a little confused. Like most young daters, she relies primarily on Robert’s short texts to divine his personality. And Robert is a creepy enigma who nevertheless does nothing technically wrong, until the end of the piece.

At one point, Margot goes over to Robert’s house (willingly) and (presumably) to have sex. And then, she experiences this emotion:

It wasn’t that she was scared he would try to force her to do something against her will but that insisting that they stop now, after everything she’d done to push this forward, would make her seem spoiled and capricious, as if she’d ordered something at a restaurant and then, once the food arrived, had changed her mind and sent it back.

What is the word for this emotion? It’s not quite regret, because you haven’t done anything yet. It’s not quite disinterest, because, well, you’re at his house, aren’t you? Is it guilt? More importantly, if she feels so uneasy, why is she going ahead with it? Is she just afraid to be rude? Is it out of self-protection? What are we to make of a sexual encounter that is technically consensual, but which Margot still considers to be “the worst life decision” she’s ever made?

In the recent powerful-man purge, and in the rape-on-campus crisis before that, there’s been a reckoning over the true meaning of consent. Some have questioned whether women who get drunk, go to men’s dorms, and even initiate intercourse could later have a genuine claim of sexual assault. Margot was at his house, wasn’t she? To some women, this passage in the story underscored the importance of the “enthusiastic” part of the new “enthusiastic consent” standard.

Treisman said she hopes the piece might make people “stop and consider what’s driving them in any given encounter of a romantic kind ... I think the fact that it’s generated this conversation has been a healthy thing.”

After the fact, Margot puts off rejecting the man by saying she’s busy. In a follow-up article, Roupenian explains how she was getting at the pressure women face to exit unwanted romantic situations gracefully:

She assumes that if she wants to say no she has to do so in a conciliatory, gentle, tactful way, in a way that would take “an amount of effort that was impossible to summon.” And I think that assumption is bigger than Margot and Robert’s specific interaction; it speaks to the way that many women, especially young women, move through the world: not making people angry, taking responsibility for other people’s emotions, working extremely hard to keep everyone around them happy. It’s reflexive and self-protective, and it’s also exhausting, and if you do it long enough you stop consciously noticing all the individual moments when you’re making that choice.

Margot’s initial attempts at gentleness don’t spare her Robert’s wrath in the end—another twist that’s all too common. A few years ago, I interviewed women who were prolific online daters. In their interactions with men on these apps, one-word replies were sometimes seen as binding international treaties specifying that shipments of sex were on the way:

A man ... had sent her the same OkCupid line three times in the course of a month, asking her if she’d like to chat. After ignoring it repeatedly, Tweten finally wrote back, “No.”

His response: “WHY THE FUCK NOT? If you weren’t interested, you shouldn’t have fucking replied at all! WTF!”

Perhaps it’s no surprise that there is already a Twitter account devoted to men criticizing the story for being too critical of the man, or too fat-shaming, or too confusing, or, um, too long. (It’s The New Yorker, my friend.)

No sooner has Margot imagined one day having a partner who would laugh and sympathize with her about the misbegotten Robert date than she thinks “no such boy existed, and never would.” It is remarkably difficult for women to talk to our romantic partners about what, exactly, it’s like for us out there. Much like the recent wave of sexual-assault scandals has served as an introduction, for men, to women’s heretofore private hell, “Cat Person” captured and explained the low-level dread that often accompanies romance for women—even the consensual kind.

Its deft portrayal of a near-universal sequence—the fear that your date might hurt you, the fear of hurting him first, the hurt that comes anyway after you spurn him—has sent it bouncing around the internet. It has women saying, in other words, “Yeah, us too.”

How Russia Hacked America—And Why It Will Happen Again
December 11th, 2017, 01:38 PM

During the 2016 presidential campaign, Russian hackers attacked the U.S. on two fronts: the psychological and the technical. Hackers used classic propaganda techniques to influence American voters, bought thousands of social-media ads to propagate fake news, and broke into Democratic Party email servers to steal information.

And it won’t be the last time. Russian-backed psychological cyber warfare will only get better, and its methods more sophisticated.

Robots Will Transform Fast Food
December 8th, 2017, 01:38 PM

Visitors to Henn-na, a restaurant outside Nagasaki, Japan, are greeted by a peculiar sight: their food being prepared by a row of humanoid robots that bear a passing resemblance to the Terminator. The “head chef,” incongruously named Andrew, specializes in okonomiyaki, a Japanese pancake. Using his two long arms, he stirs batter in a metal bowl, then pours it onto a hot grill. While he waits for the batter to cook, he talks cheerily in Japanese about how much he enjoys his job. His robot colleagues, meanwhile, fry donuts, layer soft-serve ice cream into cones, and mix drinks. One made me a gin and tonic.

H.I.S., the company that runs the restaurant, as well as a nearby hotel where robots check guests into their rooms and help with their luggage, turned to automation partly out of necessity. Japan’s population is shrinking, and its economy is booming; the unemployment rate is currently just 2.8 percent. “Using robots makes a lot of sense in a country like Japan, where it’s hard to find employees,” CEO Hideo Sawada told me.

Sawada speculates that 70 percent of the jobs at Japan’s hotels will be automated in the next five years. “It takes about a year to two years to get your money back,” he said. “But since you can work them 24 hours a day, and they don’t need vacation, eventually it’s more cost-efficient to use the robot.”

This may seem like a vision of the future best suited—perhaps only suited—to Japan. But according to Michael Chui, a partner at the McKinsey Global Institute, many tasks in the food-service and accommodation industry are exactly the kind that are easily automated. Chui’s latest research estimates that 54 percent of the tasks workers perform in American restaurants and hotels could be automated using currently available technologies—making it the fourth-most-automatable sector in the U.S.

The robots, in fact, are already here. Chowbotics, a company in Redwood City, California, manufactures Sally, a boxy robot that prepares salads ordered on a touch screen. At a Palo Alto café, I watched as she deposited lettuce, corn, barley, and a few inadvertently crushed cherry tomatoes into a bowl. Botlr, a robot butler, now brings guests extra towels and toiletries in dozens of hotels around the country. I saw one at the Aloft Cupertino.

Ostensibly, this is worrying. America’s economy isn’t humming along nearly as smoothly as Japan’s, and one of the few bright spots in recent years has been employment in restaurants and hotels, which have added more jobs than almost any other sector. That growth, in fact, has helped dull the blow that automation has delivered to other industries. The food-service and accommodation sector now employs 13.7 million Americans, up 38 percent since 2000. Since 2013, it has accounted for more jobs than manufacturing.

These new positions once seemed safe from the robot hordes because they required a human touch in a way that manufacturing or mining jobs did not. When ordering a coffee or checking into a hotel, human beings want to interact with other human beings—or so we thought. The companies bringing robots into the service sector are betting that we’ll be happy to trade our relationship with the chipper barista or knowledgeable front-desk clerk for greater efficiency. They’re also confident that adding robots won’t necessarily mean cutting human jobs.

Robots have arrived in American restaurants and hotels for the same reasons they first arrived on factory floors. The cost of machines, even sophisticated ones, has fallen significantly in recent years, dropping 40 percent since 2005, according to the Boston Consulting Group. Labor, meanwhile, is getting expensive, as some cities and states pass laws raising the minimum wage.

“We think we’ve hit the point where labor-wage rates are now making automation of those tasks make a lot more sense,” Bob Wright, the chief operations officer of Wendy’s, said in a conference call with investors last February, referring to jobs that feature “repetitive production tasks.” Wendy’s, McDonald’s, and Panera are in the process of installing self-service kiosks in locations across the country, allowing customers to order without ever talking to an employee. Starbucks encourages customers to order on its mobile app; such transactions now account for 10 percent of sales.

Business owners insist that robots will take over work that is dirty, dangerous, or just dull, enabling humans to focus on other tasks. The international chain CaliBurger, for example, will soon install Flippy, a robot that can flip 150 burgers an hour. John Miller, the CEO of Cali Group, which owns the chain, says employees don’t like manning the hot, greasy grill. Once the robots are sweating in the kitchen, human employees will be free to interact with customers in more-targeted ways, bringing them extra napkins and asking them how they’re enjoying their burgers. Blaine Hurst, the CEO and president of Panera, told me that his no-longer-needed cashiers have been tasked with keeping tabs on the customer experience. Panera customers typically retrieve their food from the counter themselves. But at restaurants where they place their orders at kiosks, employees now bring food from the kitchen to their tables. “That labor has been redeployed back into the café to provide a differentiated guest experience,” Hurst said.

How many employees, though, do you need milling about in the café? The early success of the kiosks suggests that, at least when ordering fast food, patrons prize speed over high-touch customer service. Will companies like CaliBurger and Panera see sufficient value in employing human greeters and soup-and-sandwich deliverers to keep those positions around long-term?

The experience of Eatsa may be instructive. The start-up restaurant, based in San Francisco, allows customers to order its quinoa bowls and salads on their smartphone or an in-store tablet and then pick up their order from an eerie white wall of cubbies—an Automat for the app age. Initially, two greeters were stationed alongside the cubbies to welcome and direct customers. But over time, customers relied less frequently on the greeters, co-founder and CEO Tim Young told me, and the company now employs a single greeter in its restaurants.

The type of person who orders a grain bowl on an iPhone is perhaps content to forgo a welcoming human face. There may not be enough such people to sustain a business, however, at least not yet. Eatsa announced in October that it was closing its locations in New York City; Washington, D.C.; and Berkeley. Young told me that the problem was the food, not the technology, and that other restaurant chains are interested in deploying Eatsa’s model. The taco salad I ordered was pretty good, though, and, at $8, cheaper than the fare at many other salad chains. I wondered whether the problem wasn’t that Eatsa had crossed the fine line separating efficiency from something out of Blade Runner.

Less dystopian was the scene at Zume Pizza, in Mountain View, California, where I watched an assembly line of robots spread sauce on dough and lift pies into the oven. Thanks to its early investment in automation, Zume spends only 10 percent of its budget on labor, compared with 25 percent at a typical restaurant operation. The humans it does employ are given above-average wages and perks: Pay starts at $15 an hour and comes with full benefits; Zume also offers tuition reimbursement and tutoring in coding and data science. I talked with a worker named Freedom Carlson, who doesn’t have a college degree. She started in the kitchen, where she toiled alongside the robots. She has since been promoted to culinary-program administrator, and is learning to navigate the software that calculates nutritional facts for Zume pizzas.

This has typically been the story of automation: Technology obviates old jobs, but it also creates new ones—the job title radiology technician, for example, has been included in census data only since 1990. Transitioning to a new type of work is never easy, however, and it might be particularly difficult for many in the service sector. New jobs that arise after a technological upheaval tend to require skills that laid-off workers don’t have, and not all employers will be nearly as progressive as Zume. A college education helps insulate workers from automation, enabling them to develop the kind of expertise, judgment, and problem-solving abilities that robots can’t match. Yet nearly 80 percent of workers in food preparation and service-related occupations have a high-school diploma or less, according to the Bureau of Labor Statistics.

The better hope for workers might be that automation helps the food-service and accommodation sector continue to thrive. Panera’s Hurst told me that because of its new kiosks, and an app that allows online ordering, the chain is now processing more orders overall, which means it needs more total workers to fulfill customer demand. Starbucks patrons who use the chain’s app return more frequently than those who don’t, the company has said, and the greater efficiency that online ordering allows has boosted sales at busy stores during peak hours. Starbucks employed 8 percent more people in the U.S. in 2016 than it did in 2015, the year it launched the app.

Of course, whether automation is a net positive for workers in restaurants and hotels, and not just a competitive advantage for one chain over another (more business for machine-enabled Panera, less for the Luddites at the local deli), will depend on whether an improved customer experience makes Americans more likely to dine out and stay at hotels, rather than brown-bagging it or finding an Airbnb.

That could be the case. James Bessen, an economist at Boston University School of Law, found that as the number of ATMs in America increased fivefold from 1990 to 2010, the number of bank tellers also grew. Bessen believes that ATMs drove demand for consumer banking: No longer constrained by a branch’s limited hours, consumers used banking services more frequently, and people who were unbanked opened accounts to take advantage of the new technology. Although each branch employed fewer tellers, banks added more branches, so the number of tellers grew overall. And as machines took over many basic cash-handling tasks, the nature of the tellers’ job changed. They were now tasked with talking to customers about products—a certificate of deposit, an auto loan—which in turn made them more valuable to their employers. “It’s not clear that automation in the restaurant industry will lead to job losses,” Bessen told me.

My experience with service bots was mixed. The day I visited the Aloft Cupertino, its robot butler was on the fritz. And when I asked Marriott’s new artificial-intelligence-powered chat system to look up my rewards number, it said it would get a human to help me with that. Neither interaction left me anticipating more-frequent hotel stays. As I wrote this column, however, Starbucks went from being a weekly splurge to a daily routine. The convenience of the app was difficult to pass up: I could place my order while on the bus and find my drink waiting for me when I got to the counter.

One day, I arrived at my local store to find that it had instituted a new policy requiring customers to retrieve mobile orders from a barista. (Apparently things can get a little hairy at the mobile-pickup station during rush hour at some stores.) I didn’t like the change; I’d grown accustomed to frictionless transactions. I started going to a different Starbucks location nearby, where I could pick up my coffee without the interference of a fellow human being.


This article appears in the January/February 2018 print edition with the headline “Iron Chefs.”

The Winter Getaway That Turned the Software World Upside Down
December 8th, 2017, 01:38 PM

Snowbird, Utah, is an unlikely place to mount a software revolution. Around 25 miles outside Salt Lake City, Snowbird is certainly no Silicon Valley; it is not known for sunny and temperate climes, for tech-innovation hubs, or for a surplus of ever eager entrepreneurs. But it was here, nestled in the white-capped mountains at a ski resort, that a group of software rebels gathered in 2001 to frame and sign one of the most important documents in their industry’s history, a sort of Declaration of Independence for the coding set. This small, three-day retreat would help shape the way that much of software is imagined, created, and delivered—and, just maybe, how the world works.

Whether or not you recognize its name, you’ve probably encountered Agile, or at least companies that use it. Representatives from Spotify and eBay confirmed that both companies currently use Agile, and there’s a job listing on Twitter’s website for an “Agile Coach.” Bread-crumb trails across the internet suggest that many other big-name technology companies have at least experimented with it in the past. And it’s not just Silicon Valley: Walmart reportedly began experimenting with Agile years ago. The Agile Alliance, a nonprofit that promotes the use of Agile, counts all sorts of corporate giants—including Lockheed Martin, ExxonMobil, and Verizon—among its corporate members.

Agile’s acolytes seem to be everywhere, bringing with them a whole nerd lexicon of tools and tricks to make workplaces more efficient: Think daily stand-ups and sprints. Taken at face value, it may seem like another meaningless corporate buzzword used by project-management types. But it’s actually a very specific philosophy, one that is outlined in the four-bullet, 68-word document signed at Snowbird.

* * *

Before software could eat the world, it needed to pull itself out of the deluge. Silicon Valley may be one of the only places in the world where the word “Waterfall” has a slightly negative connotation. In programming, Waterfall is used to describe a way of building software—think a slow, trickling, stage-by-stage process. Under Waterfall, the software project is rigorously designed up front, in the way that one might manufacture a wristwatch.

It worked something like this: Someone would dream up a piece of software they’d like built. Before so much as a line of code was written, the creators would lay out what they wanted built, and how, in a series of long, detailed plans. They would craft what’s called a requirements document, outlining everything they wanted the software to do. Projects then flowed downstream, from stage to stage, team to team, until they reached completion. At the very end, the entire new piece of software was tested, given back to the customer, and sent out the door.

A Waterfall process (Courtesy of the Computer History Museum)

Many attribute the origin of this model to a 1970 paper by Winston W. Royce—but there’s a big catch: Though a Waterfall-like diagram appears on the second page, Royce’s paper doesn’t actually endorse building software that way.

A linear approach might work when you know exactly what you want to build, but it can be too restrictive for some projects—software development, as Michael A. Cusumano, the Sloan Management Review distinguished professor of management at MIT, puts it, is “really an invention process.” “Software engineers or programmers like to go back and forth across those different steps,” says Cusumano. “They’re not really sequential.”

And there’s a problem with waiting until the end of a project to test the whole thing, Cusumano points out: If you catch a bug at the last stage, it can be messy—or even fatal—to try to go back and fix it. Some software projects would get stuck and simply never ship.

“People would come up with detailed lists of what tasks should be done, in what order, who should do them, [and] what the deliverables should be,” remembers Martin Fowler, the chief scientist at ThoughtWorks, who attended the Snowbird meet-up. “The variation between one software project and another is so large that you can’t really plot things out in advance like that.” In some cases, the documentation itself grew to be unwieldy. A few of the people I spoke with shared horror stories: an entire bookshelf’s worth of requirements in binders, or an 800-page document that had been translated across three different languages.

Another Snowbird participant, Ken Schwaber—the cofounder of Scrum and founder of Scrum.org—says Waterfall “literally ruined our profession.” “It made it so people were viewed as resources rather than valuable participants.” With so much planning done upfront, employees became a mere cog in the wheel.

As the pure sequential model faltered in the 1980s and early 1990s, some companies began experimenting with different ways to work through projects, creating processes that, as the former academic in the field of science and technology studies Stuart Shapiro says, allowed developers to “climb back up the waterfall.”

In a 1997 paper on Microsoft, Cusumano and his coauthor Richard W. Selby describe how Waterfall “has gradually lost favor ... because companies usually build better products if they can change specifications and designs, get feedback from customers, and continually test components as the products are evolving.”

Around the turn of the century, a few rogues in the software industry began really pushing back. They wanted to create processes that would give them more flexibility and actually allow them to ship software on time. Some of these processes, like Scrum and Extreme Programming (XP), were called “light” or “lightweight” processes. But no single approach had really caught on. So, in 2001, the lightweight guys decided to join forces.

“I think at that point, we were all sort of seeking legitimacy, that we’d sort of been all out on our own doing similar things, but it hadn’t really taken off big-time in the community,” remembers Jim Highsmith, an executive consultant at ThoughtWorks.

It’s unclear who came up with the idea for the meeting that would eventually take place at Snowbird. Many of the participants were leaders in the software community, and a few remember the idea being tossed around at industry meet-ups. When the invitation to the retreat finally arrived, it came via an email, from Bob “Uncle Bob” Martin. Martin, an industry veteran and the founder of Uncle Bob Consulting, runs The Clean Code Blog and has a perfect sense of nerd humor: A YouTube video embedded on his website features Martin, among other things, blasting off on an asteroid. Martin says he and Fowler met at a coffee shop in Chicago, where they composed and sent the email. Fowler doesn’t have any memory of their meeting, but says it’s “likely” it went down that way.

After running through a few options—like Chicago (“cold and nothing fun to do,” wrote Highsmith in 2001) and Anguilla (“warm and fun, but time-consuming to get to”)—the group settled on Utah (“cold, but fun things to do”), booking an excursion to Snowbird. There, beginning February 11, 2001, the men—and they were all men—would hit the slopes and talk software processes.

* * *

James Grenning, the founder of Wingman Software, remembers a blizzard. “We were snowed in,” he says, “and it was like avalanche conditions. No one was going to go anywhere. It was an amazing thing.”

There probably wasn’t a blizzard. Historical weather data from Dark Sky suggest there was some light snow in the days leading up to and during the retreat. But the weekend did—at least, metaphorically speaking—bury a lot of what came before it.

“I have been in many, many of these kinds of meetings throughout my career,” recalls Highsmith. “This one was really special.”

I spoke with 16 of the 17 attendees. (Kent Beck, a technical coach at Facebook, declined to be interviewed for this article.) Over a decade and a half later, they reflected on the retreat. “It was one of those things where you think, ‘You know, you’re gonna get a bunch of people in a room and they’re going to chitchat and nothing’s going to happen,’” says Martin. “And that’s just not what happened. This group of people arranged themselves, organized themselves, and produced this manifesto. It was actually kind of amazing to watch.”

Settled at Snowbird, the group began laying out what they had in common. Schwaber recalls, “When we compared how we did our work, we were just kind of astonished at the things that were the same.”

Unlike other historical documents, the Agile Manifesto doesn’t declare truths self-evident. Rather, it compares: We value this over that. This construction, some of the framers say, is one of the crucial features of the document. Of course, it’s unclear who came up with it, though several of the document’s framers have their theories.

Ward Cunningham, the cofounder of Cunningham & Cunningham (who is famous in the software community for, among other things, coining the term “wiki”), reflects on that moment. “When it was written down on that whiteboard, some people were out in the hallway on a break,” he recalls. “And I was out in the hallway, and [someone] said, ‘Come here, and look at this. Look at what we wrote.’ And we were just standing around looking at that whiteboard, in awe at how much that summarized what we held in common. It was such a dramatic moment, you know, that instead of everybody talking in small groups, we stood around that whiteboard and studied it.”

Cunningham says he jumped on a chair and took a picture of that moment “because I could tell that something profound had happened.”

So what is the Agile Manifesto? The preamble reads, “We are uncovering better ways of developing software by doing it and helping others do it.” It then lays out the four core values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The document concludes that “while there is value in the items on the right, we value the items on the left more.” Like any good founding document, the words can be interpreted differently, but the basic gist is this: Put people over process. Focus on making software that works, not documents about that software. Work with your client rather than fight over a contract. And along the way, be open to change.

The men finished the Manifesto during the retreat. They spent the rest of the time working on the 12 principles behind the new document and, well, skiing. (Some consider the principles an official part of the Manifesto; others take the Manifesto to be just the values.)

This new philosophy needed a name, and not everybody was satisfied with the “Lightweight” working title. “It somehow sounds like a bunch of skinny, feebleminded, lightweight people trying to remember what day it is,” Highsmith recalls Alistair Cockburn, another participant, saying. Cockburn, who is today an independent software consultant, remembers facilitating the search for the word. “It wasn’t like somebody said, ‘Agile! Oh great, let’s go,’” Cockburn tells me. “It was really a lot of work.” The other finalist, he says, was “Adaptive.”

“The only concern with the term ‘agile,’” writes Highsmith in his 2001 summary of the retreat, “came from Martin Fowler (a Brit for those who don’t know him), who allowed that most Americans didn’t know how to pronounce the word ‘agile.’” Fowler eventually got over it—although, after speaking with him on the phone, I couldn’t help but notice that he still pronounces the word with a British accent: an elegant ah-gile, instead of the American ah-gel.

Many described the debates at Snowbird as surprisingly friendly, especially considering the intensity of the egos in the room. Cockburn recalls “immense generosity in the listening and the understanding for other people.”

But not all the participants remember everything so rosily: “The first day had quite a lot of alpha-male-type, status-posturing-type behavior,” Brian Marick, an independent programmer and author, recalls, “which made me pessimistic that much good would come out of the meeting.” Marick says he called his wife that first evening of the retreat, telling her, “There’s a powerful odor of testosterone in this room.”

Schwaber says the group did invite “a whole bunch of really pretty knowledgeable women” but that none showed. “They thought it would just be a carousing and smoking weekend,” Schwaber says. “They didn’t think we were going to do anything intellectual or productive.”

“That was a shame,” he says, “because there’s some people that would’ve been really helpful.” But it’s unclear whether women were in fact invited: A few of the framers tell me they vaguely remember some women being invited. Others don’t.

As the men left Snowbird, no one anticipated what happened next. “When I came down the mountain, which I was riding with a couple other Manifesto writers, I was thinking to myself, ‘I’m not sure anybody would pay any attention to this,’” recalls Mike Beedle, the CEO of Enterprise Scrum. “My feel was that it was sort of like a gamble. It was like a question mark. Who knows? I mean, maybe people will go to this website that we’re proposing putting up. Or maybe they won’t.”

* * *

They did. Unlike the ink-and-paper Declaration of Independence, the Agile Manifesto was born of the internet age. The final document is hosted online, on a simple website that feels straight out of the early 2000s, featuring a bunch of guys in khakis standing around a whiteboard. Cunningham, who continues to host the site, says he intended for people to print the document and hang it up as a poster. But for a decade and a half, the website provided something greater than cubicle art—it stood as a virtual, communal rallying cry. Site visitors were invited to sign on to the Manifesto and publicly add their names to the document.

“We put that thing up, and it just exploded,” says Dave “PragDave” Thomas, a coauthor of The Pragmatic Programmer and an adjunct professor at Southern Methodist University. “That site was actually a focal point, if you’d like, for people who want to say, ‘Yes, I agree with this.’ And I think that’s one of the reasons it took off.”

Marick agrees. “I think it was really the fact that people could vent their frustration by citing Martin Luther’s theses hammered to the door, and they could put their signature on it as well,” he says. “That was what really gave it momentum.”

The ability to sign the document ended in July 2016. (Cunningham reconfigured the hosting and is beginning to treat the Manifesto like a historical document.) But in the 15 years since it was first published, he says, more than 20,000 people signed the Agile Manifesto.

* * *

The Manifesto, of course, was only the beginning. “My gosh, I wish I could’ve been there,” Grady Booch tells me. Booch, the chief scientist of software engineering for IBM Research, was invited to the retreat at Snowbird, but says he bailed at the last minute in order to deal with a “pesky customer.” Booch doesn’t doubt Agile’s seminal origin or its subsequent impact. He tells me that the 1990s were “an incredibly rich time in development in software engineering, when you had literally dozens, if not hundreds, of people that were pioneering new ideas about software development.” All of that, he says, “came to a head” at Snowbird.

An Agile process (Courtesy of the Computer History Museum)

Unlike Waterfall, Agile emphasizes iterative development, or building software in pieces. Agile teams typically work in short cycles—which are called “sprints” in Scrum, today one of the most widely used forms of Agile—that usually last two weeks each. Booch argues that both Agile and Waterfall are valid approaches, but that different projects call for different methods—and it’s important to weigh factors like the project’s risk and the culture of the team that’s executing. “If I’m building a nuclear-power plant,” he says, “believe me, I don’t want to use incremental and iterative methods because testing failure is never a good thing; it’s kind of irreversible. On the other hand, if I’m building a throwaway app for some new clone of Tinder for goats, whatever it might be, then sure, I’m gonna put a few people in a room and go build this.”

Perhaps Agile, or something like it, was inevitable: If software projects were going to be successful in the nimble, digital-first future, they needed to be able to, as goes the tech parlance, pivot—to respond to changes. The web profoundly changed the way software is delivered. Today’s software isn’t typically burned onto a CD-ROM and stocked on a store shelf; updates can be pushed to your laptop or smartphone remotely. This makes it easier to add features or fix bugs after releasing the product.

“In order to succeed in the new economy,” Highsmith writes in his 2001 summary of the retreat, “to move aggressively into the era of e-business, e-commerce, and the web, companies have to rid themselves of their Dilbert manifestations of make-work and arcane policies. This freedom from the inanities of corporate life attracts proponents of Agile Methodologies, and scares the bejeebers (you can’t use the word ‘shit’ in a professional paper) out of traditionalists.”

But this isn’t just a software story. Today, teams across industries and around the world are “going Agile”—or, at least, using bits and pieces of the Agile philosophy. The document itself has been translated into over 60 different languages.

Cockburn believes that’s because what Agile “managed to decode was something about pure mental, team-based activities”—and that “it’s just an accident of history that it was the programmers who decoded this.”

Compared with “the more mainstream, more Waterfall-ish kind of ideas” that “lay out in great detail what everybody does,” Agile is “much more empowering to the individuals doing the work,” Fowler says. And, since it has been adopted by a spectrum of professions, Arie van Bennekum goes so far as to suggest changing the word “software” to “solutions,” in order to open up Agile to everyone.

Despite discussions over whether the Manifesto itself should be amended, many of the original signers see the document as a historical—not a living—document. “It’s like a Declaration of Independence in U.S. history,” says Cockburn. “You don’t go back and rewrite that.”

“I think those four bullet points are still as valid as ever,” says Grenning. “I don’t expect them to change.”

With the end of public signing, it seems unlikely that the Agile Manifesto will ever officially change, but that doesn’t mean there aren’t problems in the world of Agile. Over the course of our conversations, many of the framers expressed a frustration with modern Agile. On the heels of Agile the philosophy came Agile the industry: Agile software, Agile coaching, Agile trainings, and Agile conferences. There’s no shortage of ways you can spend money to try to make your business or team “Agile.”

But there’s a particular irony here: Agile is a philosophy, not a set of business practices. The four bullets outline a way of thinking, a framework for prioritizing all the complicated parts of a project. They don’t tell you what software to buy or how to hold your daily team meeting. “Now you can go to a conference, and there’s aisle after aisle of people who are selling you computer tools to run your process. And they say it’s Agile,” says Cunningham. He points to the first value of the Agile Manifesto. “It says, ‘Individuals and interactions over processes and tools.’ How did [Agile] become a process-and-tools business?”

Cunningham thinks that “other people saw dollar signs and wanted to do the dollar-sign thing.” He adds, “Money has been made.”

Van Bennekum, who is today a thought leader at Wemanity, says, “I see people being an Agile coach absolutely not knowing what they’re talking about,” which is “upsetting.”

Meanwhile, Jon Kern, the chief architect at Fire Planning Associates, admits he “kind of stepped out of the Agile ring”—exhausted after a lot of people just didn’t get it. “You get a lot of people, just snake-oil salesmen—folks that say they’re doing Agile when it’s Agile in name only,” he says. Kern compares Agile to yoga, arguing his practice is personal and that he doesn’t “try to tell other people how to practice.”

The monetization of Agile aside, the influx of nontechnical users has created some conflict. Martin maintains that the “most annoying aspect right now” is that Agile “has been taken over by the project-management people,” leaving “the technical people and the technical ideas” behind.

Jeff Sutherland, a cocreator of Scrum and the CEO of Scrum, Inc., is frustrated by misreadings of the document within the software community itself. Sutherland says he sees teams in Silicon Valley that claim to be Agile, but are “not delivering working product at the end of a short iteration.” This, he says, puts them “in violation of the second value” of the Manifesto. “This kind of thing that most people are doing that they can’t get anything working in any reasonable time—that they claim is Agile because anybody can do whatever they want—is not consistent with the Agile Manifesto,” he points out.

A few have gone so far as to proclaim that Agile is dead. But Cockburn argues that there’s always some benefit to trying Agile, even if it’s not perfect: “Even badly done, Agile outperforms all of the alternatives, or at least, the main older alternative [Waterfall], which is out there.”

The intricacies—and passions—of these debates demonstrate just how big Agile has become. When I ask Kern what expectations he had leaving Snowbird all those years ago, he laughs. “For the 10th-[anniversary] reunion, we were trying to come up with some words to put on a T-shirt,” he explains, “and my proposal was: ‘Four measly bullets, and all this shit happened.’”

Google Taught an AI That Sorts Cat Photos to Analyze DNA
December 8th, 2017, 01:38 PM

When Mark DePristo and Ryan Poplin began their work, Google’s artificial intelligence did not know anything about genetics. In fact, it was a neural network created for image recognition—as in the neural network that identifies cats and dogs in photos uploaded to Google. It had a lot to learn.

But just eight months later, the neural network received top marks at an FDA contest for accurately identifying mutations in DNA sequences. And in just a year, the AI was outperforming a standard human-coded algorithm called GATK. DePristo and Poplin would know; they were on the team that originally created GATK.

It had taken that team of 10 scientists five years to create GATK. It took Google’s AI just one to best it.

“It wasn’t even clear it was possible to do better,” says DePristo. They had thrown every possible idea at GATK. “We built tons of different models. Nothing really moved the needle at all,” he says. Then artificial intelligence came along.

This week, Google is releasing the latest version of the technology as DeepVariant. Outside researchers can use DeepVariant and even tinker with its code, which the company has published as open-source software.

DeepVariant, like GATK before it, solves a technical but important problem called “variant calling.” When modern sequencers analyze DNA, they don’t return one long strand. Rather, they return short snippets, maybe 100 letters long, that overlap with each other. These snippets are aligned and compared against a reference genome whose sequence is already known. Where the snippets differ from the reference genome, you probably have a real mutation. Where the snippets differ from the reference genome and from each other, you have a problem.
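
To make the mechanics concrete, here is a deliberately naive variant caller in Python. It is a toy sketch, not GATK’s or DeepVariant’s method: the function name is invented, the snippets are assumed to be perfectly aligned already, and sequencing quality is ignored entirely. It simply piles up the reads at each position and reports where they disagree with the reference.

    from collections import Counter

    def naive_variant_calls(reference, reads):
        # reads: (start_position, sequence) pairs, already aligned to the
        # reference. Returns every position where some snippet disagrees
        # with the reference, plus a tally of the letters observed there.
        # Toy illustration only: real callers model sequencing error,
        # base quality, and diploid genotypes statistically.
        pileup = {}  # position -> Counter of letters seen in the snippets
        for start, seq in reads:
            for offset, base in enumerate(seq):
                pileup.setdefault(start + offset, Counter())[base] += 1

        calls = {}
        for pos, letters in sorted(pileup.items()):
            if set(letters) != {reference[pos]}:
                calls[pos] = (reference[pos], letters.most_common())
        return calls

    reference = "ACGTACGTAC"
    reads = [(0, "ACGTA"), (2, "GTACG"), (4, "AGGTA")]  # last read mismatches at position 5
    print(naive_variant_calls(reference, reads))  # {5: ('C', [('C', 1), ('G', 1)])}

Position 5 in the example, where the snippets conflict both with the reference and with each other, is exactly the kind of site a real caller has to adjudicate statistically.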

GATK tries to solve the problem with a lot of statistics. DNA-sequencing machines sometimes make mistakes, so the GATK team studied where the machines tend to make mistakes. (The letters GTG are particularly error-prone, to give just one example.) They thought long and hard about things like “the statistical models underlying the Hidden Markov model,” per DePristo. GATK then gives its best guess for the actual letter at a certain location in DNA.

DeepVariant, on the other hand, still does not know anything about DNA-sequencing machines. But it has digested a lot of data. Neural networks are often analogized as layers of “neurons” that deal in progressively more complex concepts—the first layer might respond to light, the second shapes, the third actual objects. As DeepVariant is trained with data, it learns which connections between “neurons” to strengthen and which to ignore. Eventually, it can sort the actual mutations from the errors.
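
For a rough sense of what such a network looks like in code, here is a minimal sketch in Python using Keras. Every dimension and layer size is an illustrative placeholder (DeepVariant’s production model is a far larger image-recognition architecture), and the three-way output corresponds to the genotype classes a caller must distinguish: no mutation, a mutation on one copy of the genome, or a mutation on both.

    from tensorflow.keras import layers, models

    # Toy convolutional network over images of candidate mutation sites:
    # 100 pixels tall, 221 wide, 3 channels. All sizes are made-up placeholders.
    model = models.Sequential([
        layers.Input(shape=(100, 221, 3)),
        layers.Conv2D(32, 3, activation="relu"),  # early layers: local patterns
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),  # deeper layers: larger motifs
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(3, activation="softmax"),  # hom-ref / heterozygous / hom-alt
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Training then consists of feeding the network labeled examples until the connection weights settle, the strengthen-or-ignore process described above.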

To fit the DNA-sequencing data to an image-recognition AI, the Google team came up with a work-around: Just make it an image! When scientists want to investigate a mutation, they’ll often pull up the aligned snippets and inspect them by eye.

“If humans are doing this as a visual task, why not present this as a visual task?” says Poplin. So they did. The letters—A, T, C, or G—were assigned a red value; the quality of the sequencing at that location, a green value; and which of DNA’s two strands the snippet lies on, a blue value. Together, they formed an RGB (red, green, blue) image.
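
A minimal sketch of that encoding in Python with NumPy follows. The function name and the specific channel values are invented for illustration; the exact numbers Google used aren’t described here.

    import numpy as np

    RED_FOR_BASE = {"A": 250, "C": 30, "G": 180, "T": 100}  # invented values

    def encode_pileup(rows):
        # rows: aligned reads, each a list of (base, quality, on_reverse_strand)
        # tuples of equal length. Returns an RGB image with one row per read
        # and one column per position in the genome.
        image = np.zeros((len(rows), len(rows[0]), 3), dtype=np.uint8)
        for i, read in enumerate(rows):
            for j, (base, quality, reverse) in enumerate(read):
                image[i, j, 0] = RED_FOR_BASE[base]            # red: which letter
                image[i, j, 1] = min(quality, 60) * 255 // 60  # green: sequencing quality
                image[i, j, 2] = 255 if reverse else 70        # blue: which strand
        return image

Stacked this way, a pileup that a human would scan by eye becomes an ordinary image array that an off-the-shelf image classifier can be trained on.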

And then it was simply a matter of feeding the neural network data. “It changes the problem enormously from thinking super hard about the data to looking for more data,” says DePristo.

Between publishing a preprint about DeepVariant last December and the release this week, the team continued improving the tool. Instead of three layers of data—represented by red, green, and blue—at any location in the genome, DeepVariant now considers seven. It would no longer make any sense as an image to the human eye. But to a machine, what’s just a few more layers of numbers?

To be clear, DeepVariant itself is unlikely to change genetics research. It is better than GATK, but only slightly so—and, depending on the conditions, it can be half as fast. It does, however, lay the groundwork for AI’s influence in future genetics research.

“The test will really be how it can translate to other technologies,” says Manuel Rivas, a geneticist at Stanford. New sequencing technologies like Oxford Nanopore are becoming popular. If DeepVariant can quickly learn variant calling under these new conditions—remember, the humans took five years with GATK—that could speed up the adoption of new sequencing technologies.

DePristo says that the idea of layering data on top of each location in the genome could easily be applied to other problems in genetics—the most important of which is predicting the effects of a mutation. You might imagine layering on, for example, data on when genes are active or not. DeepVariant started off with just three layers of data. Now it has seven. Eventually it might be dozens. It won’t make much sense to a human brain anymore, but to an AI, sure.

A Unified Theory of Meme Death
December 7th, 2017, 01:38 PM

Memes aren’t built to last. This is an accepted fact of online life. Some of our most beloved cultural objects are not only ephemeral but transmitted around the world at high speed before the close of business. Memes sprout from the ether (or so it seems). They charm and amuse us. They sicken and annoy us. They bore us. They linger for a while on Facebook and then they die—or rather retreat back into the cybernetic ooze unless called upon again.

The constancy of this narrative may be observed in any number of internet memes in recent memory, from the incredibly short-lived (Damn Daniel, Dat Boi, Salt Bae, queer Babadook) to the ones seemingly too perfect to ever perish, like Harambe the gorilla and Crying Jordan. “Disloyal Man Walking With His Girlfriend and Looking Amazed at Another Seductive Girl,” the title of the stock image shot by the photographer Antonio Guillem, made the rounds just a few months ago.

At a glance—even to a digital native—meme death seems like a much less mysterious phenomenon than meme birth. While tracing the origin of any individual meme requires a separate trip down the rabbit hole, it makes sense to assume that memes die because people get tired of them. Even though a concept such as “average attention span” is not incredibly useful to psychologists who study attention (different tasks require different attention strategies), there’s a general assumption that this number is shrinking. “Everybody knows” a generation raised on feeds and apps must have focus issues, and that assessment isn’t totally false. Our devices are “engineered to chip away at [our] concentration” in what’s called the “attention economy,” writes Bianca Bosker in The Atlantic, and apps such as Twitter keep us anxious for the next big thing in news, pop culture, or memes. Our overextended attention leads to an obvious explanation for meme death: We are so overstimulated that what brings us joy cannot hold our focus for long. But is that really why memes die?

In 2012, the third and final meeting of ROFLCon, a biennial convention on internet memes hosted by the Massachusetts Institute of Technology, anticipated a shift in the formal qualities of meme culture, ushered in by social-media sites like Facebook and Twitter. Indeed, this was the tail end of an era, one defined by the once-ubiquitous image-macro template as applied to subgenres like Advice Animals, LOLcats, and Doge. At the conference, 4chan founder Christopher “moot” Poole was “wistfully nostalgic for the slower-speed good ol’ days,” Wired’s Brian Barrett reported, fearful that memes gone “mainstream” would betray the niche communities that considered memes a kind of intellectual property all their own.

“These days, memes spread faster and wider than ever, with social networks acting as the fuel for mass distribution,” Wired’s Andy Baio wrote that same year. “As internet usage shifts from desktops and laptops to mobile devices and tablets, the ability to mutate memes in a meaningful way becomes harder.” Both Poole and Baio suggest that memes lose something essential—whether a close-knit humor or the opportunity to add a unique, creative contribution—when they are enjoyed by a larger community. Social networks, some feared, would drive memes to extinction. But Chris Torres, the creator of Nyan Cat, anticipated that the break from the old school would be a good thing. “The internet doesn’t really need to have its hand held anymore with websites that choose memes for them,” he told The Daily Dot’s Fernando Alfonso III in 2014. “As long as there is creativity in this world then they are never going away. This may just be the calm before the storm of amazing new material.”

And in 2017, it’s clear that the doomsday crew vastly underestimated internet users’ creativity. Increased mobility and access across platforms and communities have brought to the surface some of the funniest and weirdest content the web has ever known. Contrary to what Poole and Baio implied, weird humor and memes are hardly the exclusive domain of Redditors or the mostly white tech bros who populated ROFLCon. Today, many of the internet’s favorite memes come from fringe or ostracized communities—often from black communities, for whom oddball humor has long been an art form.

* * *

While internet memes categorically remain alive and well, individual memes do seem to die off faster than in Poole’s “good ol’ days.” They just don’t last like they used to: Compare the lifespans of, say, Bad Luck Brian to Arthur’s clenched fist or confused Mr. Krabs. But if overexposure is partially to blame for their demise, it certainly doesn’t tell the whole story. Nor can it alone account for the varied lifespans among concurrent memes. Crying Jordan lasted years; did Damn Daniel even last two weeks? Salt Bae took over social media in January 2017, but was quickly overshadowed by gifs of Drew Scanlon (“white guy blinking”) and the rapper Conceited (“black guy duck face”), which lasted throughout the spring.

Why do some memes last longer than others? Are they just funnier? Better? And if so, what makes a meme better? The answer lies not in traditional memetics, but in the study of jokes.

Though he has not returned to the subject in earnest since 1976’s The Selfish Gene, Richard Dawkins remains a specter over discussions of internet memes. The book, in which Dawkins extends evolutionary theory to cultural development, has been elaborated upon as well as critiqued in the four decades since its publication, spawning the field of memetics and drawing ire from neurologists and anthropologists alike. In The Selfish Gene and in memetics at large, “memes” are components of culture that survive, propagate, and/or die off just like genes do. Memetics in general is uninterested in why these components survive, or the contexts that allow them to do so—and much as individual persons are considered unwitting actors within the gene pool at large, so too are our intentions deemed irrelevant when it comes to the transmission of culture.

On the scientific side, researchers such as the behavioral scientists Carsta Simon and William M. Baum worry that the scientific rigor implied by “memes as genes” has yet to be met by actual memetics research. Anthropologists and sociologists “charge that memetics sees ‘culture’ as a series of discrete individual units, and that it blurs the lines between metaphor and biology,” wrote the Fordham University researcher Alice Marwick in 2013. And, as I’ve written, thinking of memes solely in this way tends to “relegate agency to the memes themselves” as if they are not subject to human innovation, creation, and responses. Memetics, more interested in the movement of memes than their content, may be helpful in tracking or predicting meme lifespans, but cannot fully account for how human participation factors in.

The weakness of the memes-as-genes theory becomes more apparent in an online context. By Dawkins’s deliberately capacious definition, the word “meme” may apply to sayings, bass lines, accents, clothing, myths, and body modification. In this vein, a meme in terms of digital culture could mean a viral hashtag like #tbt, tweet threading as a form of storytelling, or netspeak. However, memes as they’re popularly discussed nowadays often index something much more specific—a phrase or set of text, often coupled with an image, that follows a certain format within which user adjustments can be made before being redistributed to amuse others. Also known as: a joke.

Jokes are more than funny business and, in fact, laughter (even in acronym form) is not the standard for defining what is or is not a joke. We often laugh at things that are not jokes (like wipeouts); and jokes do not always elicit laughter (like a bad wedding toast). That memes employ humorous devices does not de facto render them jokes. But as it so happens, memes and jokes do share several formal qualities. And looking at memes as jokes may also help answer why some memes dry up, and why and when others return.

“Only when it comes to jokes is the idea of ‘meaning’ so often vehemently denied,” Elise Kramer, an anthropologist currently at the University of Illinois, wrote in a 2011 study of online rape jokes. “Poems, paintings, photographs, songs, and so on are all seen as having meaning ‘beneath’ the aesthetic surface, and the relationship between the message and the medium is often the focus of appreciation.” Kramer’s point is not that jokes cannot be explicated or unpacked, but rather that jokes—and memes, I’ll add—uniquely and deliberately make depth inconsequential to their appreciation. As displayed by recent gaffes like Bill Maher’s “house nigger” joke and Tina Fey’s “let them eat cake” sketch, comedy remains the most resilient place for ethically dodgy art. Reading “too much” into jokes is frowned upon and offended audiences are often told “it’s not that deep.”

Memes are viewed the same way, even by those who write about them. There’s an obligatory defense embedded in most meme coverage, as if writers sense they must keep the analysis to a minimum lest they spoil the fun. In a love letter to Doge, Adrian Chen wondered if “by writing it I played a crucial role” in guiding the meme toward obsolescence, “proving once again that writing about internet culture is basically inseparable from ruining internet culture.” Last summer, while declaring Harambe too dark to be corporatized and therefore too weird to die, New York magazine’s Brian Feldman admitted “there are other ways to end the Harambe meme. Like writing a think piece about it.” A month later The Guardian’s Elena Cresci repeated the line like gospel: “When it comes to memes, there’s a rule: It is dead as soon as the think pieces come out.” She even likens memes to jokes directly, asserting that “when memes go mainstream it means they’re not funny anymore. Memes are just in-jokes between people on the internet, and everyone knows jokes are much less funny once you’ve explained them.”

This evaluation shows another way memes and jokes are similar: Both are, returning to Kramer, “aesthetic forms where felicity (i.e., ‘getting’ it) is seen as an instantaneous process.” Unlike a painting, a novel, or even a rousing Twitter thread, which one is expected to “savor” like “a good meal,” the person who does not get the joke or meme immediately is considered a lost cause. “The person who spends too much time mulling over a joke is accused of ruining it,” Kramer writes. Tech reporters included, apparently.

But a commonly held accusation doesn’t equal truth. We might observe a correlation between a summer of Harambe think pieces and its decline not long after, or blame The New Yorker for making Crying Jordan uncool, but it’s worth noting that such pieces exist because their subject matter has reached a certain critical mass that makes them worth writing about. (With all due respect, I don’t believe New York magazine, The New Yorker, or even The Atlantic is propelling memes into the zeitgeist.) “Mainstream” doesn’t exactly signal the death knell, either. The “white guy blinking” gif continues to make the rounds when called upon—following the season finale of Game of Thrones, for example—and the line “ain’t nobody got time for that,” from a popular 2012 meme, was met with hearty laughter and applause when I attended Disney’s Aladdin, the musical, this fall. Some memes “die” and come back again; some surge and then become all but obsolete. Applying theories on the joke might help explain why.

* * *

Because of the shared attributes between jokes and memes, research on jokes can provide a template for how to study memes as both creative and formulaic. That includes finally finding a satisfactory answer to how and why memes “die.” In a 2015 thesis, Ashley Dainas argues that what folklorists call the “joke cycle” is “the best analogue to internet memes.” The joke cycle describes the kinds of commonplace, well-circulated jokes that become known to mass culture at large, such as lightbulb jokes or dead-baby jokes. Unlike other jokes that are highly specific—an inside joke between two friends, for example—these jokes have a mass appeal that compels them to be shared and adjusted enough to stay fresh without losing the source frame. These jokes evolve in stages, from joke to anti-joke, and will retreat over time only to resurge, sometimes a whole generation later.

Viewing jokes as cultural artifacts, researchers aren’t just concerned with plotting a joke’s life cycle but also the social contexts that make the public latch onto a specific joke during a certain time. Lightbulb jokes, for example, arose as a type of ethnic joke in the ’60s and “had swept the country” by the late ’70s, wrote the late folklorist Alan Dundes. The joke, with its theme of sexual impotence (something/one is inevitably getting screwed), was “a metaphor which lends itself easily to minority groups seeking power.” It was one means to thinly veil prejudices, using the joke as an outlet for anxieties about the civil-rights legislation achieved in the ’60s, and carried out in the ’70s and beyond. Hence most lightbulb jokes, even when they don’t cross ethnic or racial lines, tend to be a comment on some social, cultural, or economic position—“How many sorority girls does it take to change a lightbulb?” et al.

Dead-baby jokes became popular around the same period, a time marked not only by racial upheaval but also by gendered, domestic changes alongside second-wave feminism: increased access to contraception, sex education in school, women forestalling or even forfeiting motherhood in favor of financial independence. While determining exact causal relationships is a sticky matter, Dundes advised, “folklore is always a reflection of the age in which it flourishes ... whether we like it or not.”

And so too memes. Like jokes, memes are often asserted to be hollow, devoid of depth, but it would be foolish to believe that. Memes capture and maintain people’s attention in a given moment because something about that moment provides a context that makes that meme attractive. This might provide a more satisfying, but also more expansive, answer than simple boredom for why memes fall out of immediate favor. The context that makes a meme, once gone, breaks it. New contexts warrant new memes.

The 2016 U.S. election season and aftermath brought into focus how memes become political symbols, from Pepe the Frog to protest signs. In Pepe’s case, the otherwise chill and harmless character created by artist Matt Furie in the early 2000s was on the decline until he got a new context when the alt-right reappropriated him leading into the election. Pepe was resurrected from obscurity when internet culture found a new need for the cartoon’s special brand of male millennial grotesquerie.

Memes don’t just arise out of atmospheric necessity but disappear as well. The same election season effectively killed off Crying Jordan, when perhaps the idea of loss suddenly became too poignant, too meaningful for the disembodied head of a crying black figure to read as playful. Memes catch on when we need them most and retreat when they are no longer attuned to public sentiment.

Ultimately, fans and founders of the old-school meme-distribution methods aren’t entirely wrong. Flash-in-the-pan memes like Dat Boi are limited by a format that restricts the meme’s ability to evolve to the next creative iteration of itself. Dat Boi—which didn’t have much going on beneath the surface weirdness of a unicycling frog—could mutate no further, got stale, and trailed off without the chance to become cyclical (the irony) in a way that would allow it to last beyond its moment. Harambe, for all its weirdness, could not survive much beyond the life of the news story that spawned it. (In the meantime, as a friend points out, the Cincinnati Zoo has been working overtime with PR for nine-month-old hippo Fiona, who’s since become something of an internet sensation herself.)

The “expanding brain” meme, however, has continued to chug along for the greater portion of 2017. The meme, which mocks the infinite levels of intellectual one-upmanship common to any and all online discussions, is exactly what’s called for in this post-truth moment where everyone is a pundit. I foresee this one sticking around for a long while yet. Meanwhile, it’s easy to see why a festive meme like “couples costume idea” would come and go in accordance with the month of October.

As Dundes cautioned with jokes, we should not be too confident in claiming cause and effect between memes and their present contexts. Time and distance can assist us in evaluating why some memes ignited our feed, why some burned out quickly, and why others stuck around. The answers to these questions are not so random, but suggestive of the cultural, political, and economic times we live in. Provided we actually remember the memes.

“The World Wide Web has become the international barometer of current events,” the music librarian Carl Rahkonen wrote back in 2000. “The life of a joke cycle will never be the same as it was before the internet.” No kidding. The pace of life online tests the durability of culture like nothing else before, but it is still ultimately culture. The memes we forget say as much about us as the memes that hold our attention—for however long that is. We create and pass on the things that call to our current experiences and situations. Memes are us.

This Is Fine
December 7th, 2017, 01:38 PM

In video after video posted to Twitter and Snapchat early this morning, the scene near Los Angeles was the same: a stream of cars moving through the Sepulveda Pass on the 405 driving right past seams of fire, walls of flame racing up and down hillsides. The comparisons were inevitable: Mordor. The famous “This is fine” dog.

And a question arose, too. Why are these people driving on this highway?

It seemed crazy. But for what became known as the Skirball fire, the emergency response worked pretty much as it should have, according to authorities at the California Highway Patrol, LA County Fire Department, and Cal Fire. The records of the incident generated by the highway patrol indicate that the response to the fire was routine.

The highway patrol worked in concert with the county’s firefighters and Caltrans, which is responsible for the state’s transportation network, to shut down the highway, which took some time, allowing many motorists to catch video of the fire.

“We always want to have the roadway open and free of danger,” said Sergeant Saul Gomez of the California Highway Patrol. “In a case like Skirball, even if there is a fire on the right shoulder, we will leave the highway open until we deem it unsafe for the motoring public or responder personnel.”

Shutting down the 405 is a serious change to make in the Los Angeles transportation system. The northbound 405 can carry up to 11,700 cars per hour at peak times. People are trying to get to or from work, to pick up children, to keep the city functioning. It’s not a decision the highway patrol makes lightly. The agency entrusts the officers on the scene to work with local fire officials and make the call on when shutting down lanes becomes necessary. They don’t need to appeal to any higher authority.

The highway patrol received its first report of the fire at 4:51 a.m. By 5:03 a.m., the highway patrol requested that a “sigalert” be issued, which indicates a “traffic incident that will tie up two or more lanes of a freeway for two or more hours.”

By 5:15 a.m., a highway-patrol unit sent the message “ENTIRE SEPULVEDA PASS IS ON FIRE—IT’S MOVING QUICK.” Seven minutes later, the highway patrol asked for Caltrans’s help with a “hard closure” of the highway.

But physically, it is impossible to instantly shut down a major highway. Officials from some authority have to be posted at every on-ramp and traffic has to be diverted. “To shut it down completely where it’s sealed and you’re safe to walk on the freeway takes about 30 minutes to an hour,” said Gomez.

So, calls kept coming in from people on the freeway, who were driving through and (quite reasonably) scared. “[Reporting parties] are stopped on the freeway and are afraid the fire is going to come down onto the freeway and burn them,” reads one highway-patrol report from 10 minutes after the procedure for shutting down the freeway had begun.

To bring lanes of traffic under control, patrol cars will “run breaks,” swerving across lanes of traffic to bring cars to a controlled stop. That began roughly at 5:52 a.m.

By 6:31 a.m., all the northbound lanes were closed, but Joe Mendez drove by going south and took in the following scene.

By that time, the Los Angeles Police Department had been pulled in to help with other closures. Eventually the southbound side was briefly closed as well. For a short time, the legendarily busy freeway looked like this.

The empty 405 (Mario Tama / Getty Images)

By the 1 o’clock hour, the danger to the freeway had passed, and the lanes began to be reopened. Everyone I talked to saw the incident as just another day in the life of a city that’s often ravaged by fires.

What’s really changed, however, is the amount of video flowing out of scenes like this. Between Twitter and Snapchat’s geo-located video feature, it’s possible to see the raw experience of nearly anyone who is near the scene of a serious, but manageable, fire.

What would have been the breathlessly recounted story of the few becomes the vicariously lived experience of the many.

Has the Google of South Korea Found a Way to Save Struggling News Outlets?
December 7th, 2017, 01:38 PM

Walk into the headquarters of South Korea’s biggest search engine, Naver, and you could be in Silicon Valley. Like Google and Facebook, the company has an affection for bean bags and primary colors. There are oversized toys in the shape of emoji from Naver’s messaging app, Line. A green wall is lined with ferns, and there’s an immaculately designed library.

Also like Google and Facebook, Naver has a tense relationship with journalists. Though the company produces no journalism itself, Naver’s desktop and mobile news portal is South Korea’s most popular news site. (The second is another local portal, Daum.) Naver hosts stories by various outlets, somewhat similar to news-aggregation apps like Apple News. In a country where around 83 percent of the population accesses news online, the company has outsize control over what Koreans read and see.

Naver’s scale has allowed it to dominate advertising revenue in South Korea. Its success nationally is analogous to Google and Facebook’s seemingly insurmountable digital-advertising “duopoly” globally. Thanks to their immense reach and ability to target consumers online, these two tech giants have proved irresistible to advertisers, and Naver shares a measure of their advantage. Yu Seo Young, a deputy manager with the company’s news team, mentioned that local newspapers sometimes call Naver an alligator or some other apex predator when describing its market power.

This hold that internet companies now have over digital advertising has left news outlets around the world in search of a sustainable business model. Some are doubling down on subscriptions; others rely on philanthropy. But Naver has an unusual model for working with Korean news publishers: The company directly pays 124 outlets as “Naver News in-link partners.” The outlets’ stories are published on Naver’s portal, making the site a one-stop source of articles and video and eliminating the need for readers to leave and visit the original news site. All the better for Naver’s own shopping platform and its own ads. (Another 500 or so news outlets are unpaid “search partners.” The site links to the publishers’ articles, much like Google News.) The total payout comes to more than $40 million per year.

For “in-link partners,” Naver’s model offers an alternative to relying on traffic from an aggregator like Google News, or schemes like Facebook’s Instant Articles that aim to share ad revenue. The partners have a negotiable relationship with the company that wants their work—a company that needs new content for readers each time they log on. Whether Naver’s compensation to publishers is sufficient, however, remains controversial. And like some of its fellow technology giants overseas, Naver’s news practices are under increasing scrutiny.

Naver’s content fee has become a sore point. Its terms are confidential, but local news producers are well aware that Naver itself earns healthy digital-ad revenue. In 2016, it made 2.97 trillion won ($2.7 billion) from advertising. “The news media who get the payment tend to be unsatisfied with their share of the amount,” said Sonho Kim, a senior researcher at the Korea Press Foundation. “Considering [Naver’s] revenue, the news media tend to think that $40 million is tiny.”

A new Naver program dubbed PLUS aims to create a more “balanced” relationship, according to Yu. The company is beginning to share ad revenue with news partner outlets—about $7 million—based on their number of page views, among other statistics. Another $3 million from ad revenue will pay for “experimental projects” like a new fact-checking effort run by Seoul National University that aims to assess the veracity of political statements. An extra internal fund, also worth $10 million, will be shared in 2018 to reward publishers for what Yu called “quality factors,” which are still being determined.

Money aside, politics is never far away in South Korea, and never far from Naver. In an email, Choi Ki Sung, a reporter with the Korean news channel YTN, said there is unconfirmed “suspicion” that Naver downgrades news stories on its portal that are unfriendly to whichever government is in power. It’s not uncommon to hear locals suggest the news portal initially tried to bury articles about the corruption scandal that led to President Park Geun Hye’s impeachment in early 2017.

A recent incident has only fed such rumors. In October, Naver apologized over allegations that the company manipulated the ranking of articles critical of South Korea’s top football association, at the organization’s request. The Korea Herald called it the “first confirmed case of news manipulation by the portal,” noting Naver’s power over what news the Korean public sees.

This is all happening in a media landscape characterized by extreme distrust. Only 23 percent of Koreans say they trust the news media. There are plenty of reasons for this: Journalists who unquestioningly champion the country’s powerful corporations are jokingly referred to as “Samsung scholarship students.” Entrenched ideological schisms between conservative and liberal news outlets online also have an impact. “When the internet first emerged in the early 2000s, online news and blogs were all left-wing or progressive. They dominated the internet,” said Ki-Sung Kwak, the chair of the University of Sydney’s Department of Korean Studies. “Once conservative newspapers realized their mistake, they heavily invested in the online-media business.”

In his view, besides convenience, one reason why Naver may be attractive to readers is because it appears somewhat politically agnostic. For now, it does this with human editors who decide which content should be selected on specific topics and issues within their section. There are about 20 editors in news, 15 in entertainment, and 15 in sports. A chief editor has final say over what appears on the portal.

But similar to Facebook’s notorious decision to fire its human editors in 2016, Naver may soon turn to machines. In response to the Park allegations, Yu said in an email that conservatives believe Naver is biased toward the left, and liberals believe Naver is biased toward the right. “Despite our efforts, human curation is still being criticized,” she said. “Therefore, we are planning to automate article placement with algorithm[s], which will be completed during the first quarter next year.”

Still, humans will create the news-selection algorithm, and humans have a say about which outlets appear on Naver’s news portal in the first place. In 2015, the company helped form the Committee for the Evaluation of News Partnership along with Kakao, the owner of the internet portal Daum and the country’s most popular messaging platform.

The committee has two key functions. The first is to evaluate which new outlets can supply news to portal sites. The second is to penalize news outlets that violate contract conditions, such as publishing sponsored or violent content, or clickbait. Sometimes this clickbait is an attempt to game the system: Naver provides a chart on its portal that shows the most popular search keywords in real time. According to Yu, media outlets might produce up to 30 almost identical articles about one popular keyword to win clicks.

Committee members are recommended by the Korean Newspapers Association and the Korean Broadcasters Association, among others, but the power of this unelected group over which publishers can access Naver’s traffic fire hose is a sensitive issue. The lack of openness about the assessment criteria is a common complaint.

Despite its problems, the committee arguably provides a “quality control” bulwark between the public and junk news. Google also vets publications that apply to be part of its news-aggregation service, but bots and bad actors continue to haunt social media, including on YouTube, which is owned by Google, and Facebook. “Everybody on Facebook can create content, including the fake news,” Kim pointed out. “But on Naver sites, not everyone can create content.”

Naver’s model is not a complete answer for the global media industry. The company is neither solely a news aggregator nor a social-media network. And the content partnerships Naver has established with news publishers are not immediately translatable in other markets, not least because Facebook and Google are so powerful elsewhere. In response to questions about Naver’s portal, the Google spokesperson Nic Hopkins pointed out that more than 80,000 publishers are accessible through Google News. Facebook declined to comment on the record.

Despite committees and “quality factors,” Naver is a profit-seeking company just like its American contemporaries—maybe even an “alligator,” as some suggest. If news companies become too reliant on its payments, their independence could be compromised. But for a global media industry in search of a business model, Naver does offer an alternative path where news outlets have a direct financial relationship with the company that shares their content. For Kim, this is unique. “The platform, whether or not it is sufficient, tries to make some kind of collaborative effort and share profits with the news media,” he says.

Future Historians Probably Won't Understand Our Internet, and That's Okay
December 6th, 2017, 01:38 PM

What’s happening?

This has always been an easier question to pose—as Twitter does to all its users—than to answer. And how well we answer the question of what is happening in our present moment has implications for how this current period will be remembered. Historians, economists, and regular old people at the corner store all have their methods and heuristics for figuring out how the world around them came to be. The best theories require humility; nearly everything that has happened to anyone produced no documentation, no artifacts, nothing to study.

The rise of social media in the ’00s seemed to offer a new avenue for exploring what was happening with unprecedented breadth. After all, people were committing ever larger amounts of information about themselves, their friends, and the world to the servers of social-networking companies. Optimism about this development peaked in 2010, when Twitter gave its archive and ongoing access to public tweets to the Library of Congress. Tweets in the record of America! “It boggles my mind to think what we might be able to learn about ourselves and the world around us from this wealth of data,” a library spokesperson exclaimed in a blog post. “And I’m certain we’ll learn things that none of us now can even possibly conceive.”

Unfortunately, one of the things the library learned was that the Twitter data overwhelmed the technical resources and capacities of the institution. By 2013, the library had to admit that a single search of just the Twitter data from 2006 to 2010 could take 24 hours. Four years later, the archive still is not available to researchers.

Across the board, the reality began to sink in that these proprietary services hold volumes of data that no public institution can process. And that’s just the data itself.

What about the actual functioning of the application: What tweets are displayed to whom in what order? Every major social-networking service uses opaque algorithms to shape what data people see. Why does Facebook show you this story and not that one? No one knows, possibly not even the company’s engineers. Outsiders know basically nothing about the specific choices these algorithms make. Journalists and scholars have built up some inferences about the general features of these systems, but our understanding is severely limited. So, even if the Library of Congress has the database of tweets, it still wouldn’t have Twitter.

In a new paper, “Stewardship in the ‘Age of Algorithms,’” Clifford Lynch, the director of the Coalition for Networked Information, argues that the paradigm for preserving digital artifacts is not up to the challenge of preserving what happens on social networks.

Over the last 40 years, archivists have begun to gather more digital objects—web pages, PDFs, databases, various kinds of software. But while there is more data about more people than ever before, the cultural institutions dedicated to preserving the memory of what it was to be alive in our time, including our hours on the internet, may actually be capturing less usable information than in previous eras.

“We always used to think for historians working 100 years from now: We need to preserve the bits (the files) and emulate the computing environment to show what people saw a hundred years ago,” said Dan Cohen, a professor at Northeastern University and the former head of the Digital Public Library of America. “Save the HTML and save what a browser was and what Windows 98 was and what an Intel chip was. That was the model for preservation for a decade or more.”

Which makes sense: If you want to understand how WordPerfect, an old word processor, functioned, then you just need that software and some way of running it.

But if you want to document the experience of using Facebook five years ago or even two weeks ago ... how do you do it?

The truth is, right now, you can’t. No one (outside Facebook, at least) has preserved the functioning of the application. And worse, there is no thing that can be squirreled away for future historians to figure out. “The existing models and conceptual frameworks of preserving some kind of ‘canonical’ digital artifacts are increasingly inapplicable in a world of pervasive, unique, personalized, non-repeatable performances,” Lynch writes.

Nick Seaver of Tufts University, a researcher in the emerging field of “algorithm studies,” wrote a broader summary of the issues with trying to figure out what is happening on the internet. He ticks off the problems of trying to pin down—or in our case, archive—how these web services work. One, they’re always testing out new versions. So there isn’t one Google or one Bing, but “10 million different permutations of Bing.” Two, as a result of that testing and their own internal decision-making, “You can’t log into the same Facebook twice.” It’s constantly changing in big and small ways. Three, the number of inputs and complex interactions between them simply makes these large-scale systems very difficult to understand, even if we have access to outputs and some knowledge of inputs.

“What we recognize or ‘discover’ when critically approaching algorithms from the outside is often partial, temporary, and contingent,” Seaver concludes.

The world as we experience it seems to be growing more opaque. More of life now takes place on digital platforms that are different for everyone, closed to inspection, and massively technically complex. What we don’t know now about our current experience will resound through time: Historians of the future will know less, too. Maybe this era will be a new dark age, as resistant to analysis then as it has become now.

If we do want our era to be legible to future generations, our “memory organizations,” as Lynch calls them, must take radical steps to probe and document social networks like Facebook. Lynch suggests creating persistent, socially embedded bots that exist to capture a realistic and demographically broad set of experiences on these platforms. Or, alternatively, archivists could recruit actual humans to opt in to having their experiences recorded, as ProPublica has done with political advertising on Facebook.
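
Stripped to its essentials, the bot idea amounts to a scheduled snapshotter. Here is a minimal sketch, assuming a hypothetical placeholder feed URL; real platforms would require authenticated APIs and, presumably, their cooperation:

    import json
    import time
    import urllib.request
    from datetime import datetime, timezone

    FEED_URL = "https://example.com/api/timeline"  # hypothetical placeholder

    def snapshot(url):
        # Fetch whatever this account would be shown right now.
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode("utf-8")
        record = {
            "captured_at": datetime.now(timezone.utc).isoformat(),
            "url": url,
            "body": body,  # the "performance," preserved exactly as delivered
        }
        with open("archive.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    while True:
        snapshot(FEED_URL)
        time.sleep(3600)  # capture the personalized feed once an hour

Even then, such a bot records only what its own account sees—which is why Lynch imagines fleets of them, demographically varied and persistent over years.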

Lynch’s suggestion is radical for the archival community. Archivists generally allow other people to document the world, and then they preserve, index, and make these records available. Lynch contends that when it comes to the current social media, that just doesn’t work. If they want to accurately capture what it was like to live online today, archivists, and other memory organizations, will have to actively build technical tools and cultural infrastructure to understand the “performances” of these algorithmic systems. But, at least right now, this is not going to happen.

“I loved this paper. It laid out a need that is real, but as part of the paper, it also said, ‘Oh, by the way, this is impossible and intractable,’” said Leslie Johnston, director of digital preservation at the U.S. National Archives. “It was realistic in understanding that this is a very hard thing to accomplish with our current professional and technical constructs.”

Archivists are encountering the same difficulties that journalists and scholars have run up against studying these technologies. In an influential paper from last year, Jenna Burrell of the University of California’s School of Information highlighted the opacity that frustrates outsiders looking at corporate algorithms. Obviously, companies want to protect their own proprietary software. And the code and systems built around the code are complex. But more fundamentally, there is a mismatch between how the machines function and how humans think. “When a computer learns and consequently builds its own representation of a classification decision, it does so without regard for human comprehension,” Burrell writes. “Machine optimizations based on training data do not naturally accord with human semantic explanations.”

This is the most novel part of what makes archiving our internet difficult. There are pieces of the internet that simply don’t function on human or human-generated or human-parse-able principles.

While Seaver considered Lynch’s proposals—to create an archival bot or a human army to record the experience of being on an internet service—plausible, he cautioned that “it’s really hard to go from a user experience to what is going on under the hood.”

Still, Seaver sees these technical systems not as totally divorced from humans, but as complex arrangements of people doing different things.

“Algorithms aren’t artifacts, they are collections of human practices that are in interaction with each other,” he told me. And that’s something that people in the social sciences have been trying to deal with since the birth of their fields. They have learned at least one thing: It’s really difficult. “One thing you can do is replace the word ‘algorithm’ with the word ‘society,’” Seaver said. “It has always been hard to document the present [functioning of a society] for the future.”

The archivist, Johnston, expressed a similar sentiment about the (lack of) novelty of the current challenge. She noted that people working in “collection-development theory”—the people who choose what to archive—have always had to make do with limited coverage of an era, doing their best to try to capture the salient features of a society. “Social media is not unlike a personal diary,” she said. “It’s more expansive. It is a public diary that has a graph of relationships built into it. But there is a continuity of archival practice.”

So, maybe our times are not so different from previous eras. Lynch himself points out that “the rise of the telephone meant that there were a vast number of person-to-person calls that were never part of the record and that nobody expected to be.” Perhaps Facebook communications should fall into a similar bucket. For a while it seemed exciting and smart to archive everything that happened online because it seemed possible. But now that it might not actually be possible, maybe that’s okay.

“Is it terrible that not everything that happens right now will be remembered forever?” Seaver said. “Yeah, that’s crappy, but it’s historically quite the norm.”

Looking for the Linguistic Smoking-Gun in a Trump Tweet
December 4th, 2017, 01:38 PM

President Donald Trump’s behavior on Twitter routinely drives entire news cycles. This weekend, he showed that a single word within a single presidential tweet can be explosive.

Trump set off alarm bells with his published response to the news that his former national security adviser, Michael Flynn, had pleaded guilty to lying to the FBI.

The tweet published to Trump’s account clearly implied that he already knew that Flynn had deceived the Feds when he fired him back in February: “I had to fire General Flynn because he lied to the Vice President and the FBI. He has pled guilty to those lies. It is a shame because his actions during the transition were lawful. There was nothing to hide!”

That unleashed a frenzy of speculation about whether Trump had just admitted to obstructing justice, since it seems he must have known that Flynn had committed a felony when he was pressuring then-FBI director James Comey to ease up on the Flynn case.

But then came word that maybe Trump didn’t write the tweet after all. The Washington Post reported that “Trump’s lawyer John Dowd drafted the president’s tweet, according to two people familiar with the Twitter message.” The Associated Press also identified Dowd as the one who “crafted” the tweet, citing “one person familiar with the situation,” though Dowd himself declined to make a comment to the AP.

Attributing the tweet to Dowd set off a new round of incredulous chatter. Would the president’s lawyer really compose a tweet like that on his client’s behalf, especially one that seemed so incriminating? One widely shared response, from a person who tweets from the account @nycsouthpaw, focused on a single word in the tweet as grounds for skepticism: “We’re supposed to believe John Dowd wrote pled instead of pleaded?”

Others argued that Dowd could very well have used pled as the past tense of plead. Harvard Law School professor Jonathan Zittrain noted, “I’ve seen lawyers write each. It’s not like, you know, hung and hanged.” Indeed, both pleaded and pled are considered acceptable by American usage guides—though, in many newsrooms, pled is considered a rookie mistake, which helps explain why some journalists seized on it.

Pled actually dates back to the 16th century, and though it never gained much traction in British English, it has been gaining in popularity in American English over the past few decades. Some prefer pled because they think pleaded sounds wrong, based on analogous past-tense forms like bleed/bled and feed/fed. Plenty of legal types don’t seem to mind pled, at least not in the United States. In fact, when the blog Above the Law polled its readers in 2011, 57 percent of the 1,311 respondents preferred pled to pleaded.

But what of Dowd himself?

I searched through the LexisNexis news database to try to find his preference for forming the past tense of plead, and I discovered an example from January 2010, when Dowd was representing the billionaire hedge-fund manager Raj Rajaratnam, who was standing trial for insider trading. As quoted in The Wall Street Journal, Dowd said of Rajaratnam, “He’s pled not guilty and we intend to try his case and demonstrate that he’s innocent.” (Rajaratnam was later found guilty and is currently serving an 11-year prison sentence.)

So Dowd, too, is on record as a pled user. That single word does not betray some nonlawyerly voice—Trump’s or anyone else’s—so we can’t point to it as evidence for who really wrote that tweet. It would be a tidy solution to isolate the use of pled as a kind of “tell” disproving the attribution of the tweet to Dowd, but it is in fact exceedingly difficult to identify such a linguistic smoking gun.

One skeptic on Twitter wrote, “A forensic linguist could rule out Dowd in 5 minutes. Once that happens, Trump has no backpedal.” Actual forensic linguists would be hard-pressed to rule Dowd in or out on the basis of a single tweet, however. The field of authorship analysis requires significant amounts of textual data in order to be reliable. First, one would need to compile past texts firmly attributed to the potential authors—in this case, Trump and Dowd. That could at least establish idiosyncratic patterns of style and usage, but for a low-frequency word like pled, even that approach may prove fruitless. (For what it’s worth, Trump had never previously used pled in a tweet, according to the Trump Twitter Archive. Trump’s only use of pleaded is from a news article he quoted.)
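
The usual starting point for that kind of work is stylometry: profile each candidate author by the relative frequencies of common function words across texts they are known to have written, then measure how close the disputed text sits to each profile. A toy sketch in Python, with placeholder texts standing in for the corpora a real analysis would need:

    import math
    from collections import Counter

    FUNCTION_WORDS = ["the", "and", "to", "of", "a", "that", "it", "he"]

    def profile(text):
        # Relative frequency of each function word in the text.
        words = text.lower().split()
        counts = Counter(words)
        total = len(words) or 1
        return [counts[w] / total for w in FUNCTION_WORDS]

    def cosine(p, q):
        dot = sum(a * b for a, b in zip(p, q))
        norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
        return dot / norm if norm else 0.0

    disputed = profile("i had to fire general flynn because he lied to "
                       "the vice president and the fbi")
    known = profile("placeholder: a large corpus of one candidate's known writing")
    print(cosine(disputed, known))  # closer to 1.0 means stylistically more similar

With only a tweet’s worth of words, though, most of those frequencies are zero or noise, which is exactly why a single tweet gives the method so little to grip.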

Authorship analysis has had some notable success stories, but not involving something as slender as a tweet. In 2013, I wrote in The Wall Street Journal about how forensic analysts helped determine that Harry Potter author J.K. Rowling had written a crime novel, The Cuckoo’s Calling, under the pen name Robert Galbraith. I asked one of the experts, Patrick Juola of Duquesne University, to detail his approach in a guest post for Language Log, a blog about linguistics that I contribute to. When a commenter remarked that it would be interesting to analyze the anonymous Twitter tip that had set off the investigation, Juola replied, “It would indeed be interesting, but authorial analysis of tweets is HARD. Not enough data, you see.”

That sort of challenge has been taken up by some forensic linguists, such as Tim Grant of Aston University, who has been working on techniques to analyze tweets and other short-form messages. But such an analysis would be even trickier in this case, since Dowd—if he is indeed the true author of the controversial tweet—may have been attempting to mimic the Twitter voice of Trump. That could explain the exclamation point at the end, for instance—a classic Trumpian touch. Or Dowd could have “drafted” the tweet with Trump subsequently making revisions or at least adding some finishing touches. Then we’d be dealing with an even murkier co-authorship situation. It looks like we’ll simply have to wait for further illumination of the story behind the tweet’s composition: No single word or punctuation mark is going to give the game away.

How the Index Card Cataloged the World
December 1st, 2017, 01:38 PM

Like every graduate student, I once holed up in the library cramming for my doctoral oral exams. This ritual hazing starts with a long reading list. Come exam day, the scholar must prove mastery of a field, whether it’s Islamic art or German history. The student sits before a panel of professors, answering questions drawn from the book list.

To prepare for this initiation, I bought a lifetime supply of index cards. On each four-by-six rectangle, I distilled the major points of a book. My index cards—portable, visual, tactile, easily rearranged and reshuffled—got me through the exam.

Yet it never occurred to me, as I rehearsed my talking points more than a decade ago, that my index cards belonged to the very European history I was studying. The index card was a product of the Enlightenment, conceived by one of its towering figures: Carl Linnaeus, the Swedish botanist, physician, and the father of modern taxonomy. But like all information systems, the index card had unexpected political implications, too: It helped set the stage for categorizing people, and for the prejudice and violence that comes along with such classification.

* * *

In 1767, near the end of his career, Linnaeus began to use “little paper slips of a standard size” to record information about plants and animals. According to the historians Isabelle Charmantier and Staffan Müller-Wille, these paper slips offered “an expedient solution to an information-overload crisis” for the Swedish scientist. More than 1,000 of them, measuring five by three inches, are housed at London’s Linnean Society. Each contains notes about plants and material culled from books and other publications. While flimsier than heavy stock and cut by hand, they’re virtually indistinguishable from modern index cards.

The Swedish scientist is more often credited with another invention: binomial nomenclature, the latinized two-part name assigned to every species. Before Linnaeus, rambling descriptions were used to identify plants and animals. A tomato, for example, was a mouthful: Solanum caule inermi herbaceo foliis pinnatis incisis. After Linnaeus, the round fruit became Solanum lycopersicum. Thanks to his landmark study, Systema Naturae, naturalists had a universal language, which organized the natural world into the nested hierarchies still used today—species, genus, family, order, class, phylum, and kingdom.

In 18th-century Europe, Linnaeus became a household name. “Tell him I know no greater man on earth,” said Jean-Jacques Rousseau of his Swedish idol. Like other savants of his day, Rousseau saw the study of plants as a moral pursuit, a virtuous escape into nature. Germany’s man of letters, Johann Wolfgang von Goethe, confessed that after Shakespeare and Spinoza, no one had influenced him more than Linnaeus. “God created—Linnaeus arranged,” went the adage.

But despite his meteoric success, Linnaeus had a problem. The man who made order from nature’s chaos did not have a good management system for his own work. His methods for sorting and storing information about the natural world couldn’t keep up with the flood of it he was producing. Linnaeus’s appearance only added to an aura of disorder. Stunned visitors described the prince of botany as a “markedly unshaven” man in “dusty shoes and stockings.” Writing about himself, Linnaeus was even less charitable: “Brow furrowed. A low wart on the right cheek and another on the right side of the nose. Teeth bad, worm-eaten.”

Worms aside, the real issue vexing Sweden’s top scientist was how to manage a data deluge. He had started out collecting plants in the woods of his native southern Sweden. But as his profile grew, so did his research and writing, and the number of students under his wing. Achieving scientific renown of their own, Linnaeus’s students sent him specimens from their travels in Europe, Russia, the Middle East, West Africa, and China. According to Charmantier and Müller-Wille, most botanists of the era employed a team to manage their affairs that would keep track of correspondence and categorize specimens. But not Linnaeus, “who preferred to work alone.” Starting in the 1750s, he complained in letters to friends of feeling overworked and overwhelmed. Burnout, it turns out, isn’t a modern condition.

* * *

Linnaeus’s predicament wasn’t new, either. In her book Too Much to Know: Managing Scholarly Information before the Modern Age, the historian Ann Blair explains that since the Renaissance, “the discovery of new worlds, the recovery of ancient texts, and the proliferation of printed books” unleashed an avalanche of information. The rise of far-flung networks of correspondents only added to this circulation of knowledge. Summarizing, sorting, and searching new material wasn’t easy, especially given the available tools and technologies. Printed books needed buyers. And while notebooks kept information in one place, finding a detail buried inside one was another story. Finishing an academic dissertation wasn’t just a test of erudition or persistence; dealing with the material itself—recording, searching, retrieving it—was a logistical nightmare.

Many scholars, like the 17th-century chemist Robert Boyle, preferred to work on loose sheets of paper that could be collated, rearranged, and reshuffled, says Blair. But others came up with novel solutions. Thomas Harrison, a 17th-century English inventor, devised the “ark of studies,” a small cabinet that allowed scholars to excerpt books and file their notes in a specific order. Readers would attach pieces of paper to metal hooks labeled by subject heading. Gottfried Wilhelm Leibniz, the German polymath and coinventor of calculus (with Isaac Newton), relied on Harrison’s cumbersome contraption in at least some of his research.

Linnaeus experimented with a few filing systems. In 1752, while cataloging Queen Ludovica Ulrica’s collection of butterflies with his disciple Daniel Solander, he prepared small, uniform sheets of paper for the first time. “That cataloging experience was possibly where the idea for using slips came from,” Charmantier explained to me. Solander took this method with him to England, where he cataloged the Sloane Collection of the British Museum and then Joseph Banks’s collections, using similar slips, Charmantier said. This became the cataloging system of a national collection.

Linnaeus may have drawn inspiration from playing cards. Until the mid-19th century, the backs of playing cards were left blank by manufacturers, offering “a practical writing surface,” where scholars scribbled notes, says Blair. Playing cards “were frequently used as lottery tickets, marriage and death announcements, notepads, or business cards,” explains Markus Krajewski, the author of Paper Machines: About Cards and Catalogs. In 1791, France’s revolutionary government issued the world’s first national cataloging code, calling for playing cards to be used for bibliographical records. And according to Charmantier and Müller-Wille, playing cards were found under the floorboards of the Uppsala home Linnaeus shared with his wife Sara Lisa.  

In 1780, two years after Linnaeus’s death, Vienna’s Court Library introduced a card catalog, the first of its kind. Describing all the books on the library’s shelves in one ordered system, it relied on a simple, flexible tool: paper slips. Around the same time that the library catalog appeared, says Krajewski, Europeans adopted banknotes as a universal medium of exchange. He believes this wasn’t a historical coincidence. Banknotes, like bibliographical slips of paper and the books they referred to, were material, representational, and mobile. Perhaps Linnaeus took the same mental leap from “free-floating banknotes” to “little paper slips” (or vice versa). Sweden’s great botanist was also a participant in an emerging capitalist economy.

* * *

Linnaeus never grasped the full potential of his paper technology. Born of necessity, his paper slips were “idiosyncratic,” say Charmantier and Müller-Wille. “There is no sign he ever tried to rationalize or advertise the new practice.” Like his taxonomical system, paper slips were both an idea and a method, designed to bring order to the chaos of the world.

The passion for classification, a hallmark of the Enlightenment, also had a dark side. From nature’s variety came an abiding preoccupation with the differences between people. As soon as anthropologists applied Linnaeus’s taxonomical system to humans, the category of race, together with the ideology of racism, was born.

It’s fitting, then, that the index card would have a checkered history. To take one example, the FBI’s J. Edgar Hoover used skills he burnished as a cataloger at the Library of Congress to assemble his notorious “Editorial Card Index.” By 1920, he had cataloged 200,000 subversive individuals and organizations in detailed, cross-referenced entries. Nazi ideologues compiled a deadlier index-card database to classify 500,000 Jewish Germans according to racial and genetic background. Other regimes have employed similar methods, relying on the index card’s simplicity and versatility to catalog enemies real and imagined.

The act of organizing information—even notes about plants—is never neutral or objective. Anyone who has used index cards to plan a project, plot a story, or study for an exam knows that hierarchies are inevitable. Forty years ago, Michel Foucault observed in a footnote that, curiously, historians had neglected the invention of the index card. The book was Discipline and Punish, which explores the relationship between knowledge and power. The index card was a turning point, Foucault believed, in the relationship between power and technology. Like the categories they cataloged, Linnaeus’s paper slips belong to the history of politics as much as the history of science.


This post appears courtesy of Object Lessons.

Social Apps Are Now a Commodity
December 1st, 2017, 01:38 PM

I am very old. As in, my age begins with a four, a profoundly uncool number for an age to start with. Which is to say, too old to use Snapchat, the image-messaging social-network app. Founded in 2011, it’s most popular among young people, who spurned Facebook and even Instagram for it. Why? For one, because we olds are on Facebook and even Instagram. For another, because Snapchat is simply a thing that young people use, and so other young people use it. That’s how the story goes, anyway.

But maybe something simpler is happening. Perhaps there is no magic in any of these apps and services anymore. Facebook and Instagram, Snapchat and GroupMe and Messenger and WhatsApp and all the rest—all are more or less the same. They are commodities for software communication, and choosing between them is more like choosing between brands of shampoo or mayonnaise than it is like choosing a set of features or even a lifestyle.

* * *

It’s not just a myth that Snapchat is for young people. Sixty percent of its users are 25 or younger, and 37 percent fall between 18 and 24, the demographic revered by marketers. Almost a quarter of the app’s users are under 18. But that’s also changing, as more millennials—or should I say 30-somethings—pick up the app too.

One reason is that older folk have, for years, been using Instagram, which is owned by Facebook (which they’ve also used since college or high school). Facebook has been systematically copying Snapchat’s most popular features, including Stories, ephemeral 24-hour photo montages of a user’s activity. It’s no surprise: Facebook has enormous wealth and leverage, including 2 billion users of its core service and over a billion each for its messaging apps, Messenger and WhatsApp. Instagram boasted some 30 million users when Facebook acquired the company in 2012, and that figure has swelled to 800 million in the five years since. Snapchat is stuck around 170 million users.

Snap, the company that makes Snapchat, has shed more than half its value since peaking just after going public in March of this year. Its current market cap, about $16 billion, is still more than the $3 billion Facebook offered to acquire the company. And Google had reportedly bid up to $30 billion for the company in advance of the IPO. Snap denied the rumor, but if true, it’s an offer the company might now regret having spurned to go it alone.

Snap’s attempts to shake its doldrums have had mixed results. A year ago, the company introduced a $130 pair of glasses called Snap Spectacles, which took photos for its app. Initial demand was high, but it soon collapsed. Less than half of buyers were still using the gadget a month after purchase. Snap wrote down almost $40 million in excess inventory.

It also paid around $100 million to acquire a Canadian company called Bitstrips, integrating its Bitmoji product, a stylized avatar, into the Snapchat service (Bitmoji can also be used as stand-alone stickers in other messaging and social-media apps). Bitmoji gave every Snapchat user a stylized yet strikingly accurate cartoon image of themselves. And it offered a new platform for advertising, via sponsored avatars—an approach the company had previously explored with ad-supported photo filters and lenses.

Bitmoji also gave Snapchat a standard, visual way to represent its users. In June, Snap released Snap Map, which allows friends to see one another’s activity, avatars included, on a map.

None of these innovations really helped turn around Snap’s decline. Though still popular among its core audience, its stock dropped 20 percent in November, after the company missed revenue, profit, and user-growth expectations. Its user base had grown by only 3 percent since the previous quarter.

* * *

This week, Snap CEO Evan Spiegel announced a redesign of Snapchat. The app is notoriously unintuitive for the unfamiliar, and the redesign, which Spiegel promised in the wake of dismal Q3 results, hopes to boost adoption by making the app easier for newcomers to learn.

The announcement came in the form of a short video of Spiegel explaining the “new and improved” Snapchat. The video is disorienting—a video of a video shoot, really, with a jaunty yellow backdrop displayed as a prop, the camera cutting between views of Spiegel and of the film crew filming him. “Look at us working hard,” the video’s subtext telegraphs.

Its text is more mysterious. As a nonuser of Snapchat, I found Spiegel’s promises so vague and woolly that they might apply to anything whatsoever. He vows to make Snapchat “more personal.” Your friends “aren’t content; they’re relationships,” he opines, rationalizing the redesign’s shift of sponsored posts into their own view, separated from friends. This move, which gets its own post-production textual overlay on either side of Spiegel’s gaunt body, amounts to “separating the social from the media.” All of it makes perfect sense so long as you don’t think about it even for a second.

The changes themselves are straightforward. Snapchat’s default view is the camera. To the left are chats and stories from friends, and to the right those from publishers and sponsors. For the first time, the friends view works as an algorithmic feed rather than a chronological list—like Facebook, Instagram, and Twitter. When it rolls out over the coming weeks, the new Snapchat will privilege close friends over acquaintances—assuming your close friends are the ones you send snaps to most often. In that respect, the app will work a lot more like messaging apps such as GroupMe, WhatsApp, and Messenger than social-media apps such as Instagram or Twitter. In Spiegel’s dressed-up script, that amounts to “organizing Snapchat around your relationships to make it more personal.”
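
Snap has not said how that feed actually ranks people, so the specifics are guesswork. A toy sketch in Python of the stated principle (surface the friends you snap most often) might look like this; the names and send history are invented:

    # A toy sketch, not Snap's actual ranking: order the friends
    # feed by how often you send each person snaps.
    from collections import Counter

    send_history = ["maya", "jordan", "maya", "sam", "maya", "jordan"]  # hypothetical
    snap_counts = Counter(send_history)

    # most_common() sorts by frequency, most-snapped friends first
    friends_feed = [name for name, count in snap_counts.most_common()]
    print(friends_feed)  # -> ['maya', 'jordan', 'sam']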

Every other social app intersperses sponsored posts with organic content for visibility, so it’s hard to imagine why anyone would ever choose to open Snapchat’s sponsored view, where that material is safely sequestered on its own. But perhaps the company hopes to take a hit on ad and sponsored-post performance, if not revenue, to demonstrate user growth to Wall Street.

Most notable to me, watching the video, was the incessant refrain that the redesign would inspire its audience to “Express yourself with your friends.” At its start, Spiegel deadpans, Snapchat “made it easier to express yourself by talking with pictures.” The redesign, he promises, will make it easier to find the people you want to express yourself with. The result? “The friends you want to talk to will be there when you want to talk to them.”

As a Snapchat nonuser, I find it easier to hold these claims at a distance. Not because they are incredible or stupid or even bad, but because they are so ordinary and humdrum that it seems ridiculous to call them remarkable. In essence, Snapchat hopes to compete by taking a weird, unique, unseemly product that lures a specific audience partly because of those very qualities, and transforming it into yet another chat app—even if a photo-centric one—that works more or less like any other.

It makes me wonder: What makes someone choose one app over another? Why use Twitter over Facebook, or Instagram over Snapchat, or GroupMe over Messenger? Knowing how bitterly old I am, I ask my kids, teenagers who use Snapchat like most teens do.

“Snapchat still has more features, even given the stuff that Instagram stole from them,” my daughter explains. Her scorn for Instagram, which she also uses, is palpable. Among those features are best friends, which is just what it sounds like, and streaks, a kind of high score for daily posts back and forth with specific Snapchat friends. She has never had a Facebook account and thus doesn’t use Messenger, although she does use GroupMe (which is owned by Microsoft) for group chats.

My son, who is a couple years older, did get Facebook immediately upon eligibility at age 13, although he never uses it anymore. He tells me that most of his friends use Snapchat or GroupMe for ordinary, day-to-day conversation—not just for social preening, as many old people imagine they do. I feel even longer in the tooth when he explains that Messages—the blue-bubble iPhone replacement for texting—is something he hardly ever uses. Except to talk to old people, like his parents. Texts, once the bastion of screen-shocked youth, have already gone the way of email, that dour and grizzled technology of geriatrics.

* * *

People do the things they do. They start because those things are convenient, or ready to hand, or shared by peers, or momentarily novel. The college students who started using Facebook in the mid-2000s did so because it was new, accessible at universities, and spreading quickly. The parents and friends and grandparents who followed in the years after picked it up because others were doing so. WhatsApp gained popularity in nations where SMS remained expensive but contacts were still identifiable by telephone number.

There are functional differences between the services. Instagram is made of pictures, but more oriented toward photographic aesthetics than Snapchat, which uses pictures as messages. That’s what Spiegel means by “talking with pictures”; it’s phatic visual communication, as my colleague Rob Meyer puts it. Likewise, Twitter’s constraint, at 140 or 280 characters, makes it different from Facebook. GroupMe’s ease of adding multiple people to a chat separates it from Messenger, or Apple’s Messages.

But even though those differences make a difference, they are also remarkably small differences, and increasingly smaller ones, as the various services borrow and steal from one another, as Instagram and Snapchat and others have done. Instead of distinctive services with clear value propositions, these apps are becoming commodities. All commodities have real product differentiators—Coke tastes different (ahem, better) than Pepsi; Secret shills deodorants specially formulated for women, while Old Spice dudes them up for men, and so on. But at bottom, the rapport people develop with a particular product or service comes down to a hazy affinity born of discovery, branding, peer adoption, and other accidents of timing and circumstance. Repeated use, not to mention product marketing, reinforces that choice over time.

Snapchat doesn’t make me feel old because it’s so much cooler than Twitter or Messenger, nor because I’m so uncool that I couldn’t possibly grasp it (even if both claims might also be true). Rather, it’s just that Snapchat is the communication service that young people have picked up of late. Telecommunication apps are universal and numerous enough that they support shifting trends and fashions.

It’s no different than drinking Jolt Cola or listening to Fugazi or wearing Z Cavariccis or subscribing to call waiting or keeping the line busy while dialing up to Prodigy—all things that were also cool, at one time. The difference is: Nobody thought of soft drinks or music or apparel—or even telephony and computer services, really—as problems to be atomized into individual companies, let alone public ones, meant to corner the market. They were just commodities differentiated through unique, but temporary, variations in form, function, and packaging. Indeed, the whole reason commodities are commodities is that they are so cheap and easy to produce that competition encourages such differentiation.

It would be a relief if this might yet become the future of computing. No more innovation and disruption and other chest-thumping boasts. No more world-changing deliverance from the stodgy, legacy paradigms of yore. Just communication offerings in the form of software, offered in various styles with nuanced distinction, each doing their part in letting people interact, so that they can get on with life beyond their rectangles.

Prepare for the New Paywall Era
November 30th, 2017, 01:38 PM

If the recent numbers are any indication, this year has been a bloodbath in digital media. Publishers big and small are coming up short on advertising revenue, even if they are long on traffic.

The theory of digital publishing has long been that because people are spending more time reading and watching stories on the internet than anywhere else, the ad revenue would eventually follow them from other media types. People now spend more than 5.5 hours a day with digital media, including three hours on their phones alone.

The theory wasn’t wrong. Ad dollars have followed eyeballs. In 2016, internet-ad revenue grew to almost $75 billion, pretty evenly split between ads that run on computers (desktop) and ads that run on phones (mobile). But advertising to people on computers is roughly at the level it was in 2013. That is to say, all the recent growth has been on mobile devices. And on mobile, Facebook and Google have eaten almost all that new pie. These two companies are making more and more money. Everyone else is trying to survive.

In a print newspaper or a broadcast television station, the content and the distribution of that content are integrated. The big tech platforms split this marriage, doing the distribution for most digital content through Google searches and the Facebook News Feed. And they’ve taken most of the money: They’ve “captured the value” of the content at the distribution level. Media companies have no real alternative, nor do they have advertising products that can compete with the targeting and scale Facebook and Google offer. Facebook and Google need content, but it’s all fungible. The recap of a huge investigative blockbuster is just as valuable to Google News as the investigative blockbuster itself. The latter might have taken months and cost tens of thousands of dollars; the former, a few hours and the cost of a young journalist’s time.

That’s led many people, including my colleague Derek Thompson, to the conclusion that supporting rigorous journalism requires some sort of direct financial relationship between publications and readers. Right now, the preferred method is the paywall.

The New York Times has one. The Washington Post has one. The Financial Times has one. The Wall Street Journal has one. The New Yorker has one. Wired just announced they’d be building one. The Atlantic, too, uses a paywall if readers have an ad blocker installed (in addition to the awesome Masthead member program, which you should sign up for).

Many of these efforts have been successful. Publications have figured out how to create the right kinds of porosity for their sites, allowing enough people in to drive scale, but extracting more revenue per reader than advertising could provide.

Paywalls are not a new idea. The Atlantic had a different one in the mid-’00s. The Adweek article announcing that this paywall was being pulled down is a fascinating time capsule. Paywalls, back then, were often seen as a way of protecting the existing print businesses.

“Despite worries that putting a print magazine’s full content online for free will erode the subscriber base, nothing could be further from the truth,” wrote Adweek. “Subscribers largely obtain magazines for advantages that can be garnered only from the print version (portability, ease of use); those looking only for free articles to read can easily look at websites that offer similar content instead.”

The idea that the paid revenue from a site itself could contribute to earnings in a meaningful way was not even considered. And that made sense. The scale of most magazine sites was tiny.

“In 2007, TheAtlantic.com tripled its traffic to 1.5 million unique users and 8 million page views,” Adweek continued. “During that period, digital ad sales grew to 10 percent of total ad sales, and traffic has grown faster than The Atlantic’s digital-marketing investment.”

The first time around, many paywalls simply did not work. But times have changed. The New York Times’ success in transforming itself into a company markedly less dependent on advertising than it was just a few years ago has emboldened many other publishers. The Times now makes more than 20 percent of its revenue from digital-only subscriptions, a share that has been growing quickly. In absolute terms, last quarter, the Times made $85.7 million from these digital products.

The question is: Can media organizations that are not huge like the Times or The Washington Post, or business-focused like the Financial Times or The Wall Street Journal, create meaningful businesses from their paywalls?

Here’s the optimistic case that they can.

For one, many digital-media properties have much larger audiences than they used to. The Atlantic had 42.3 million visitors in May. It’s hard for sites to capture the value of that whole audience with advertising alone, especially because traffic can be spiky. But in marketing terms, that whole audience is just the top of the funnel. And that’s a big funnel. Let’s say that 1 percent of visitors to The Atlantic’s site subscribed for $10 a month. (I’m not privy to conversations about pricing. I’m just making this up.) Do the math: That’s $50 million a year, which would be very significant for the magazine’s business.
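
To check that arithmetic, here is the same back-of-the-envelope calculation as a few lines of Python; the visitor figure comes from the paragraph above, while the conversion rate and price are hypotheticals, not real numbers:

    # Back-of-the-envelope funnel math; the 1 percent conversion
    # rate and $10/month price are hypotheticals.
    monthly_visitors = 42_300_000  # The Atlantic's visitors in May
    conversion_rate = 0.01         # hypothetical: 1% subscribe
    price_per_month = 10           # hypothetical: dollars per month

    subscribers = monthly_visitors * conversion_rate
    annual_revenue = subscribers * price_per_month * 12
    print(f"${annual_revenue:,.0f} per year")  # -> $50,760,000 per year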

It’s not just the difference in scale for different media properties, though. The reigning ideology of the internet has broken apart. In the wild days of the ’00s, paywalls were seen as breaking the way the web worked, with sites linking to each other to build on the knowledge we were collectively producing. As it turns out, the culture of links fell apart as digital journalism became more focused on traditional sections publishing individual stories and not blogs that linked to each other frequently. The rise of platform-specific video and the dominance of Facebook finished off the web as it was known in the ’00s.

Today’s intentionally porous paywalls, too, keep information flowing, even as they help companies capture subscribers.

The infrastructure for buying stuff on the internet also has gotten a lot better. There are payment platforms like PayPal and Apple Pay. There are initiatives at Apple and Facebook to make it easier to sell subscriptions. There is the mere fact that people buy tons of stuff on their phones now, and have become increasingly comfortable with the idea of paying for content. (Thanks, New York Times!)

When The New York Times introduced its paywall in early 2011, people flocked to Google to search for the term. It just wasn’t a familiar idea.

U.S. interest in the term “paywall,” according to Google

Six years later, this way of charging people for websites is no longer unusual. People may not always love them, but they know the deal.

Smaller magazines may be able to use the same digital-marketing tools to drive subscriptions in the way that other “lifestyle” brands have. One reason that Facebook has grown so quickly is that it has proven to be a very effective machine for putting in marketing dollars and getting out revenue. In the ’00s, or even five years ago, it would have been very difficult to target ads at readers except on one’s own site. Now, all the targeting tools that have made the digital-advertising business more difficult for publications can help the paid-content business.

A lot of questions remain, however, especially as more publications turn to paywalls. The group of people who pay for any kind of journalism is still relatively small. Based on the current numbers of subscribers to the big publications, we’re probably talking about a group that numbers in the single-digit millions. That’s the addressable market.

So, as more and more publications try to woo these particular consumers, how will they split up their dollars? How annoyed will subscribers become at having to remember another half-dozen passwords? If every publication goes all-in on paywalls, which ones would make your list?

Maybe the whole model of single sites running their own paywalls will not carry the day. Somebody is going to try to make the process of accessing this paid content easier and cheaper, whether it’s Apple, Flipboard, Facebook, or a new entrant.

So, expect lots of paid-content experiments, many taking the form of paywalls, but there’ll be everything from apps to merch to live events. Digital media has lived and died with advertising, but now it’s mostly just dying.

Donald Trump’s Obsession With Time Magazine Makes Almost Too Much Sense
November 29th, 2017, 01:38 PM

If you had to pick the year Time magazine’s “person of the year” jumped the shark, you’d probably start with 2006. That was when Time looked at the rise of open-publishing platforms like Wikipedia, YouTube, and Facebook, and decided the most influential person was the collective “you.” It was cheesy, trite, and had the exact effect Time wanted: Everybody talked about it.

Time’s annual “person of the year” designation has always been a gimmick, going all the way back to Charles Lindbergh in 1927. Time was once a scrappy upstart, but for decades it was a very serious must-read magazine. Now that the heyday of newsmagazines has receded, the pool of people who have ever held a physical copy of Time in their hands has shrunk. Yet the “person of the year” still creates a residual media buzz—attention that, as my colleague David Graham wrote in 2012, really isn’t justified. “Year-end wrap-ups,” he wrote, “simply aren’t news.”

Well, they’re not news until the president of the United States gets involved, anyway.

Donald Trump has always had a gift for making a big deal out of nothing. “I can only say that the press couldn’t get enough,” he wrote in The Art of the Deal in 1987. Back then, he was still trying to figure it out. Now, as president, whipping the press into a frenzy is, for Donald Trump, muscle memory.

“Time Magazine called to say that I was PROBABLY going to be named ‘Man (Person) of the Year,’ like last year,” Trump tweeted on Friday, “but I would have to agree to an interview and a major photo shoot. I said probably is no good and took a pass. Thanks anyway!”

The magazine wasted little time firing back: “The President is incorrect about how we choose Person of the Year. TIME does not comment on our choice until publication, which is December 6.”

“Man of the Year”—it became “person” in 1999—is arguably the Trumpiest possible tradition in magazine journalism. And not just because of Trump’s apparent obsession with appearing on the cover. In June, The Washington Post discovered that what looked like a back issue of Time magazine featuring Trump on the cover—and displayed in at least five of Trump’s clubs—was, in fact, doctored. The fake cover featured a serious-looking Trump with twin, glowing assessments: “Donald Trump: The ‘Apprentice’ is a television smash!” and “TRUMP IS HITTING ON ALL FRONTS . . . EVEN TV!” The real issue of Time magazine at the time featured the actress Kate Winslet on the cover.

One can only imagine the conversations that took place among the Time editorial team in the past 24 hours, but one thing almost certainly came up: Trump’s bizarre decision to insert himself, at this dramatic moment in American life, into Time’s pick for a fading print-era tradition is decidedly good for business. (And, by the way, Time actually did name Trump “person of the year” in 2016.) This, at a time when the print-magazine business is generally not thriving. Time’s newsroom is still home to many great journalists, but the economic environment for newsweeklies is absolutely brutal. Remember Newsweek? It once routinely determined the national conversation. Not so anymore. (To answer your question, yes, Newsweek does still exist.) Meanwhile, the Koch brothers are backing the Meredith Corporation’s possible purchase of the storied publication, according to The New York Times.

Donald Trump and Marla Maples meet characters from the television show Dinosaurs during lunch at the Plaza Hotel in 1992. (Henry Ray Abrams / Reuters)

One way to force people to—if not actually care—pay attention: a defiant tweet from President Trump. On one hand, why on Earth would Donald Trump—the president of the United States, Donald Trump—care what Time magazine is doing? On the other, of course Donald Trump is fixated on Time magazine’s “person of the year” contest. It’s as simultaneously weird and unsurprising as if Trump started griping about room service at the Plaza, or bar service at Elaine’s, or pick-your-own-1990s-New-York-City-reference. Donald Trump is a man whose concept of wealth is all Manhattan circa 1989. And in Manhattan in 1989, Time magazine was the king of the newsstand.

Trump became a public figure and a celebrity at Time’s apex. But more than that, Time is the perfect manifestation of Trump’s attitude toward success. To understand why a person like Donald Trump would gravitate toward a magazine like Time, you have to look at both of their histories.

In the 1980s, when Time was still a cash cow and Trump was still cementing himself as a mainstay on Page Six, Time was a very serious publication and Trump was a semiserious fixture in the tabloids. Cable news—Trump’s preferred journalistic medium today—was still in its infancy. Trump seemed preordained for gossip-rag stardom: There was the personal drama—an ugly divorce, a dramatic altercation in Aspen, the alleged infidelity with a beautiful eventual second wife—and, of course, the very public bankruptcy and surprising redemption. Trump enjoyed relentless pop-cultural relevance through it all—he had a cameo in Home Alone 2, remember—and seemed forever destined for B-list celebrity at best. Looking back at the media landscape of Trump’s younger years, the idea of Trump on the cover of Time seemed as silly as Trump buying the Plaza Hotel, or being elected president of the United States.

Donald Trump, we now know, is a man who is energized by the improbable.

Trump’s wealth—and the persona he built around it—has always been aspirational. The sprawling expanses of cold marble, the New Jersey gold fixtures, the aggressively nouveau riche aesthetic. The superlatives that spill so easily from Trump’s lips—everything’s the biggest, and the best, and the most. Time wasn’t so different at first. The magazine was founded by rich men playing with their fathers’ money—no member of the founding staff was more than three years out of college, the magazine’s historian Theodore Peterson once wrote. As my colleague Robinson Meyer and I wrote in 2015, Time became the most powerful media instrument of mid-century America. In the early 1980s, as Trump was rising to fame, Time was absolutely flush with cash:

“So flush,” John Podhoretz wrote in Commentary, describing what it was like to work for Time in the 1980s, “that the first week I was there, the World section had a farewell lunch for a writer who was being sent to Paris to serve as bureau chief ... at Lutèce, the most expensive restaurant in Manhattan, for 50 people. So flush that if you stayed past 8, you could take a limousine home ... and take it anywhere, including to the Hamptons if you had weekend plans there. So flush that if a writer who lived, say, in suburban Connecticut, stayed late writing his article that week, he could stay in town at a hotel of his choice.”

All of it sounds absurd today, as over-the-top as the very idea of “person of the year.”

In 2006, when Time’s person of the year was “you,” Trump was riding out a triumphant phase in what had become a roller coaster of successes and setbacks—still basking in his fame as the host of The Apprentice, a reality game show on NBC.

Trump was 60 years old then, yet the experience was still formative in a way. The Apprentice was a hit, and its success was measured in television ratings. Even when Trump became president of the United States of America—a spectacle so much bigger than any game show—he fixated on the size of the inauguration crowd. Old habits.

Donald Trump holds up a cover of Time magazine featuring his portrait at a campaign fundraiser in 2015. (Brian Snyder / Reuters)

The man who wrote the Time story in 2006, the celebration of “you,” described his methodology at the time: “When you’re picking Time’s Person of the Year, you play a little bit historian of the future. What is the story of 2006 that people will remember? And more, I think, than any of the political or military stories, the shift in power from consumers of media becoming producers of the media ... that will really change a lot of things.”

Trump’s tweets are proof of this shift, evidence that Time’s “person of the year” pick in 2006 was actually prescient. Trump’s very presidency is not easily disentangled from this. Time matters to Trump, not just because of the narcissism it takes to care in the first place—let alone tweet about it—but because Time and Trump both arose in a bygone era. A moment of wealth and possibility in New York, and by extension America. Time always saw itself as the magazine for a very specific kind of American greatness. Trump, he swears, is just the same.

The Algorithm That Catches Serial Killers
November 28th, 2017, 01:38 PM

“I wonder if we could teach a computer to spot serial killers in data,” Thomas Hargrove thought as he parsed the FBI’s annual homicide reports. The retired news reporter would soon answer his own question. He created an algorithm that, in his words, “can identify serial killings—and does.”


In The Dewey Decimal System of Death, a new film from FreeThink, Hargrove explains how “the real world is following a rather simple mathematical formula, and it’s that way with murder.”


The numbers are startling. According to Hargrove, since 1980, there have been at least 220,000 unsolved murders in the United States. Of those murders, an estimated 2,000 are the work of serial killers. Many of these cases are not ultimately reported to the Justice Department by municipal police departments; Hargrove has assiduously obtained the data himself. His Murder Accountability Project is now the largest archive of murders in America, with 27,000 more cases than appear in FBI records.


Hargrove has put the database to work with an algorithm that solves an informatics problem called “linkage blindness.” In the U.S. justice system, Hargrove explains, “the only way a murder is linked to a common offender is if the two investigators get together by the water cooler and talk about their cases and discover commonalities.” Hargrove’s algorithm instead identifies clusters of unsolved murders that are related by method, location, and time, as well as by the victim’s gender.
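
Hargrove’s actual code isn’t reproduced in the film, but the core idea can be sketched in a few lines of Python: bucket homicides by shared attributes, then flag buckets with several victims and a low clearance (solved) rate. The records, field names, and thresholds below are invented for illustration:

    # A minimal sketch of linkage detection, not Hargrove's algorithm:
    # group homicides by shared attributes and flag clusters that are
    # mostly unsolved.
    from collections import defaultdict

    # Hypothetical records: (county, method, victim_sex, solved)
    cases = [
        ("Lake County, IN", "strangulation", "F", False),
        ("Lake County, IN", "strangulation", "F", False),
        ("Lake County, IN", "strangulation", "F", False),
        ("Lake County, IN", "gunshot", "M", True),
    ]

    clusters = defaultdict(list)
    for county, method, sex, solved in cases:
        clusters[(county, method, sex)].append(solved)

    for key, outcomes in clusters.items():
        clearance = sum(outcomes) / len(outcomes)
        # Several victims plus a low clearance rate suggests linkage.
        if len(outcomes) >= 3 and clearance < 0.33:
            print("Possible serial cluster:", key)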


Most recently, Hargrove used his software to discover and alert the police department in Gary, Indiana, to 15 unsolved strangulations in the area. “It was absolute radio silence,” he says in the film. “They would not talk about the possibility that there was a serial killer active.” After Hargrove was rebuffed, seven more women were killed. He says it was “the most frustrating experience of my professional life.”


You Had to Be There
November 28th, 2017, 01:38 PM

There’s a question going around on Twitter, courtesy of the writer Matt Whitlock: “Without revealing your actual age, what’s something you remember that if you told a younger person they wouldn’t understand?”

This simple query has received, as of this writing, 18,000 responses. Here is just a tiny selection:

Etcetera. You are welcome to peruse the replies looking for your precise moment in time to be pinned to the screen, wiggling.

It is obvious that most of the relics of earlier eras that stick with people are technological, or at least about the material culture of technology.

It is banal to note that these technological eras are becoming shorter. No one expects today’s social networks or electronics to last as long as AM radio or the internal combustion engine or even three-channel broadcast television. That’s not how products work anymore. Many things are designed for obsolescence and the rest end up there anyway with frightening speed.

Most of the time, this occasions alarm. Everything’s speedin’ up! Future shock, etc.

But there is pleasure, “That’s my shit!” kind of pleasure, in possessing this knowledge of obsolete lived experience. As the technologies we live with exist for less and less time, a more precise psychological archaeology becomes possible. The slices of time that we invoke when we say, “Remember when you could record songs off the radio” or “Remember the sound of a dial-up modem” or “Remember Facebook before News Feed” or “Remember borders on Instagram pictures” become thinner and thinner. A decade becomes a couple years becomes a few months. The combination of your age and life station and technological possibilities fuse into a fixed moment that’s more meaningful than any generational label or simple number. What gives these moments their power is that they mark our time as it is smashed to pieces by market forces and what is sometimes called progress.

And yet you share something real with the other people who understand, like really understand, one of these moments. And no one else can join the club. It can’t be looked up on Wikipedia or gathered from Google or glimpsed in a YouTube video. You can’t buy that experience at Etsy or eBay or even the weirdest vintage place in all of Oakland.

You had to be a body in a place in a culture. You had to be there.

Nazis Are Just Like You and Me, Except They're Nazis
November 27th, 2017, 01:38 PM

Editor’s Note: This essay was inspired by “A Voice of Hate in America’s Heartland,” published in The New York Times on November 25, 2017. The Times reflected on the shortcomings of the piece after it was met with outrage and ridicule.

ELKO, Nevada—“The water is almost ready,” he said, bending down to look for the little bubbles. “Once you see the bubbles rising to the surface, you know the water is hot enough to cook the pasta.”

Steve Stevenson dispenses wisdom freely, though he is not a chef. He is 32 years old, and he drinks whole milk, and his tattoos are nonviolent. The kitchen spice rack contains only garlic powder. He wears jeans made of denim. The T-shirt on his back has a tag sticking out, and I read it as he leans in to eye the pot of water: “100 percent cotton.”

“What can I say,” jokes Stevenson, as he sees me taking note of the spice rack. “I like garlic powder.”

We both chuckle. The shimmering evening sun glints off the porcelain saltshaker and casts a long shadow onto the linoleum. As I follow its path, his wife Stephanie appears in the kitchen doorway, an exasperated look on her face.

“You forgot to put the toilet seat down again,” she says, rolling her eyes and pulling her phone out of her back pocket. Stephanie is pretty. Her hair is saffron and flaxen, and she wears jeans also, and she has a wry smile.

Stephanie Stevenson is followed by a normal dog, who walks into the room with a slight limp, and Steve pets it. He leans in.

“The Jews control all the money, and the world would be better off if they were dead,” he says, petting the dog. “Who’s a good boy?”

The question is rhetorical. I ask about the wallpaper.

Some people disagree with Stevenson’s political views.

“He’s a nice enough guy,” said the local grocer, Butch Tarmac, a registered Democrat. “He buys apples and pancake mix. I also like those things. But I guess we’ll have to agree to disagree on the bit about the one true race cleansing the soil and commanding what is rightfully theirs.”

“It’s totally fucked up,” said one person, whose name I didn’t catch.

Sometimes the Stevensons go to Applebee’s. There they like to order margaritas and onion rings and laugh about some of the paraphernalia on the walls.

“The World War II propaganda is just really far out,” laughs Stevenson. He does an impression of a hippie when he says “far out.” He has a full and radiant smile. I ask him if he had braces, and he says yes.

“Hitler gets a bad rap, but he was a pretty righteous dude,” he says, half addressing me, and half addressing his four wide-eyed children. We’re all crammed into the booth like a bunch of sardines. He tells me to only refer to him and his Nazi friends as “The Traditionalist Worker Party,” and I agree to do that.

I ask if the kids go to public school.

The Stevensons laugh.

“The schools are full of coloreds,” says Stephanie, smiling wryly. Her teeth shimmer in the reflected glow of the neon and the flaxen-colored nacho cheese. She is wearing a cotton-wool hoodie and her hair is in a hasty ponytail, spilling out in places like spaghetti in a full pot, in an attractive way. “I know I’m not supposed to say that, I know it’s not PC or whatever, but they are.”

She is wearing an armband with an embroidered swastika. It is available for purchase here. [Link redacted.]

I ask about the well-worn Aerosmith T-shirt peeking out through the open zipper of her hoodie. She says she has seen the band three times, and each time was “amazing.”

I press her on this. In my experience, the band has been in decline since the late 1990s.

“They played ‘Dream On’ and ‘Walk This Way,’” she gushes.

Hadn’t she noticed that Steven Tyler’s vocal quality had significantly deteriorated over the past decade? Didn’t that matter at all?

“Those are her favorite songs!” says her husband. I worry I’m hitting a nerve. The first Aerosmith concert was where Steve proposed to Stephanie, and its sentimental magnitude was clearly formative to their romantic bond.

He adds that the song “Dude (Looks Like a Lady)” is feminist deep-state mind control. He instructs his kids to do violence to any boys who appear feminine.

The Stevensons have two cars and they are both green.

Dog Poo, an Environmental Tragedy
November 27th, 2017, 01:38 PM

In 1915, William Carlos Williams published a poem about dog waste. “Pastoral” shuns the rural landscape in favor of a city scene, with an old man walking in the gutter. In Williams’s assessment, the man “gathering dog lime”—a euphemistic name for dog dung—does work “more majestic than / That of the episcopal minister.” When industrial fertilizer later replaced dung heaps, its spoils helped fund the spread of plastics.

A 21st-century reader would likely find the man’s action unremarkable. Today, dog feces are understood to have dangerous levels of E. coli and salmonella, not to mention untold parasites. Therefore, they must be tucked away in plastic bags and deposited at the nearest poop station. Williams’s old man is significant for his dignity but not his occupation.

But it wasn’t always this way. Animal waste once provided a necessary ingredient for agriculture, especially at a local scale. When industrial methods of fertilization combined with germ theory, dung heaps became outmoded. Then the same chemical industries that synthesized fertilizer developed plastics, the materials now used in bags to clean up dog poop instead of recycling it.

* * *

Before Western societies built conduits to flush excrement into the waterways, it was piled into dung heaps for reuse. Even in the Middle Ages, the stuff was a source of valuable materials, if a noxious one. For alchemists, dung heaps were a source of saltpeter and, for some, including the 12th-century master Morienus, they provided the first materials for fabricating the philosopher’s stone. In an era before the Bunsen burner, dung heaps provided chemical researchers with a source of constant, elevated temperatures. The complex preparations were placed in flasks and buried in piles of manure where they underwent “digestion”—a process of slow heating over many weeks, and one of the fundamental transformations in the alchemical tradition.

In the 19th century, physicians and public-health officials began to understand disease-transmission vectors with more precision, even though germ theory did not triumph until near the end of the century. Particularly in Western Europe, health reforms inspired large urban public works to deal with waste. Government officials inventoried communities, especially where there were concentrations of urban poor. Not surprisingly, they found huge piles of garbage, including human and animal waste, in the streets.

Edwin Chadwick’s 1842 “Report on the Sanitary Condition of the Laboring Population and on the Means of its Improvement” describes the era’s conflicting excremental economies with shock and disdain:

There were no privies or drains there, and the dung heaps received all filth which the swarm of wretched inhabitants could give; and we learned that a considerable part of the rent of the houses was paid by the produce of the dung heaps. Thus, worse off than wild animals, many of which withdraw to a distance and conceal their ordure, the dwellers in these courts had converted their shame into a kind of money by which their lodging was to be paid.

Chadwick’s laboring population was primarily urban, but 19th-century America retained much of its Jeffersonian, agrarian nature. As a result, excrement had a different value in the New World. In 1853, D.J. Browne published The Field Book of Manures (also called The American Muck Book). He devotes a chapter to dog feces, noting that “this manure, wherever it could be obtained in sufficient abundance, has been found to be, it is stated, the ‘most fertile dressing of all quadruped sorts.’” Browne goes on to describe an 18th-century English farmer from Bedfordshire with an abundance of setters and spaniels. Their dung reportedly enabled his gravelly fields to outperform those of his neighbors. He also describes the valuable “corrosive” power of white dog dung, which he attributes to a carnivorous, bone-heavy diet.

Such white dung is the probable source of the appellation “dog lime.” It is likely Williams’s old man with the majestic tread was not cleaning up after his pet like the modern dog walker, but gathering dog lime for his garden. As a result, his garden probably out-yielded those of neighbors who did not stoop to such indignity.

* * *

During the 20th century, Western attitudes toward dog ordure shifted. The world is still full of dogs, but owners now fear the diseases this once-useful commodity represents. The triumphs of Joseph Lister, Louis Pasteur, and their fellow germ theorists explain part of the aversion, but not all of it. Industry had a greater role to play in the retirement of animal waste.

For alchemists, dung heaps provided chemicals and constant temperature. But modern chemists made the dung heap obsolete by producing synthetic alternatives. The change began with Friedrich Wöhler’s synthesis of urea in 1828, the first organic compound produced from inorganic ones. Wöhler’s work paved the way for the Haber process, an industrial-scale ammonia-production method, and the birth of the modern fertilizer industry. (Fritz Haber is also known for managing the laboratory that developed Zyklon B, initially for use as an insecticide, later deployed by the Nazis in their death camps.)
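
For readers who want the chemistry, the two reactions behind this shift can be written compactly (here in LaTeX notation):

    % Wöhler (1828): ammonium cyanate rearranges into urea
    \mathrm{NH_4OCN} \longrightarrow \mathrm{CO(NH_2)_2}

    % Haber process: atmospheric nitrogen fixed as ammonia
    \mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}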

Using Haber’s method in the 1920s, IG Farben, the German chemical cartel, scaled the production of urea from ammonium carbamate, providing the vast amounts of nitrogen required for modern fertilizer production. In a stroke, a nearly odorless powder that comes in a paper bag replaced the noxious, oozing dung heaps of old. Who needs to shovel American muck when you can sprinkle a sack of 10-10-10?

The enormous success of the commercial fertilizer and pesticide industries had far-reaching effects. IG Farben took another substance first discovered in the 19th century, polystyrene, and began large-scale production of plastic. Today, ordinary people encounter this material in the form of packing peanuts or disposable drink coolers. The related material polyurethane followed in 1939; it is fundamental for wood finishes, among many other industrial uses. In the years leading up to World War II, Wallace Carothers worked to industrialize the production of nylon at DuPont, and in the United Kingdom another chemical giant, Imperial Chemical Industries, was working to scale production of polyethylene, the most common plastic. Production of polyethylene began in 1935; by the 1950s, a series of related discoveries enabled the industrial production of high-density polyethylene (HDPE), used for plastic bags, rigid pipes, and other goods.

* * *

In the plastic bag now used to collect and dispose of it, dog lime comes full circle. Low-density polyethylene is found in grocery bags, sandwich wrappers, and the bags in those boxes set outside city parks—the modern way to gather dog lime. The dung is considered useless and dangerous, and the plastic used to wrap and dispose of it righteous and safe. And yet plastic, the opposite of an oozing, dangerous dung heap, is now a greater threat to human survival, choking sea life, degrading the food chain, polluting the air during manufacture, and overwhelming landfills after disposal. In poop-bag form, polyethylene serves as a sanitary barrier, though its ability to conduct heat still gives contemporary gatherers momentary pause—that jolt of recognition that one is handling, albeit indirectly, an active, vibrant substance.

As Williams’s dog-lime gatherer shows, a long-standing human practice—the cycling of organic materials for the promotion of new life—has given way to a sanitary society disconnected from the ecological cycle that cleaning up dog poo supposedly serves. Dog waste is now timeless, wrapped in an eternal casing and buried in landfills, anaerobic environments where even biodegradable bags stay intact.

So much for the poop scooper’s moral triumph over the church minister. Even so, the conclusion of Williams’s poem still rings true: “These things / Astonish beyond words.”


This post appears courtesy of Object Lessons.