Technology | The Atlantic
The 1850s Response to the Racism of 2017
August 16th, 2017, 01:24 PM

Last night, Tucker Carlson took on the subject of slavery on his Fox News show. Slavery is evil, he noted. However, slavery permeated the ancient world, he said, as reflected in the on-screen graphics.

Screengrab of Tucker Carlson’s Tuesday-night show

On Twitter, recent University of Toronto English Ph.D. graduate Anthony Oliveira noted, “Here's Tucker Carlson right now on Fox making the *exact* pro-slavery case (bad but status-quo and well-precedented) made 160 years ago.”

It sounds like a particular variety of Twitter gallows humor, not meant to be taken quite seriously. But it is not a joke.

This precise series of ostensible mitigating factors around the institution of American slavery was, in fact, advanced by pro-slavery forces through the 19th century. And it got me wondering: Given that The Atlantic was founded as an abolitionist magazine before the Civil War, might there be an article or two that addresses Carlson’s warmed-over proto-Confederate arguments?

And indeed, there are.

Take Carlson’s bullet point, “Until 150 years ago, slavery was rule.”

Well, yes. Slavery was legal in some American states. But how did this happen, especially when other countries began abolishing slavery early in the 19th century? In our second issue, Edmund Quincy put his pen to “Where Will It End?” And he doesn’t mess around. Slavers had power because they went on bloody conquests to open up new territory for slavery.

The baleful influence thus ever shed by Slavery on our national history and our public men has not yet spent its malignant forces. It has, indeed, reached a height which a few years ago it was thought the wildest fanaticism to predict; but its fatal power will not be stayed in the mid-sweep of its career ... Slavery presiding in the Cabinet, seated on the Supreme Bench, absolute in the halls of Congress—no man can say what shape its next aggression may not take to itself. A direct attack on the freedom of the press and the liberty of speech at the North, where alone either exists, were no more incredible than the later insolences of its tyranny ... The rehabilitation of the African slave-trade is seriously proposed and will be furiously urged, and nothing can hinder its accomplishment but its interference with the domestic manufactures of the breeding Slave States ... Mighty events are at hand, even at the door; and the mission of them all will be to fix Slavery firmly and forever on the throne of this nation.

Indeed, in the early days of The Atlantic, the violent battle over whether Kansas would become a slave state raged. In “The Kansas Usurpation,” from Issue 4, our author details the endless skulduggery that slavers perpetrated “to force the evils of slavery upon a people who cannot and will not endure them.”

And how about the idea that ancient peoples also held slaves? The Atlantic didn’t address Greek slaveholding, but it did take on the Greeks’ admirers, the Romans. In a piece called “Spartacus,” published in Issue 3 in 1858, the author explicitly differentiates the Roman version of slavery from the American one.

“Fowell Buxton has happily translated [the Roman motto], ‘They murdered all who resisted and enslaved the rest.’ But it was as slaveholders that the Romans most clearly exhibited their impartiality,” the piece states. “They were above those miserable subterfuges that are so common with Americans. They made slaves of all, of the high as well as the low—of Thracians, as well as Sardinians, of Greeks and of Syrians as readily as of Scythians and Cappadocians.”

With ever-increasing rigor from colonial times, the American system explicitly made only people with African ancestry subject to chattel slavery; that is, they were the only people whose children were born enslaved and who would die enslaved, absent an extraordinary circumstance. American slavery was different.

To be clear, this isn’t just about Carlson. My target is the implicit idea that American slavery was not historically, distinctly terrible. It was. There is no parallel. While other countries—and states within the Union—were banning slavery, the South was intensifying slavery in several different ways.

First, the ideological and theological interpretation of slavery in the South began to change. The specific and perpetual enslavement of African people had seemed to Jeffersonian Americans as an evil that was ebbing away. “In the late 18th century, most Americans believed that slavery, as institutionalized dependence, was neither good nor practical, and so would fade before the action of natural forces under the new, free political system,” writes John Patrick Daly in When Slavery Was Called Freedom: Evangelicalism, Proslavery, and the Causes of the Civil War.

But as abolitionists began to succeed in the northern states, chattel slavery of black human beings began to be theologically promoted as something to be proud of, possibly even holy, in the South. “Good slaveholders, they maintained, gave the institution its character—that is, goodness,” Daly writes. “This formulation allowed proslavery spokesmen to denounce the historically evil institution of slavery while defending Southern practices: Slaveholders in the evil form of slavery were bad men; the Southerners were good, and the source of their wealth untainted. Good—and especially evangelical—slaveholders supposedly redeemed the institution of slavery.”

Second, the old colonial state slaveowners were making a business out of selling the people they enslaved south and west. This became a lynchpin of the region’s wealth as agriculture declined there. Black people were chained together in Virginia and the Carolinas and marched to Georgia, to Florida, to Mississippi, to Texas. Whatever networks of family and community they’d been able to build within the oppressive violence of slavery were destroyed (again).

Ed Baptist tells this story in The Half Has Never Been Told: Slavery and the Making of American Capitalism. “The massive and cruel engineering required to rip a million people from their homes, brutally drive them to new, disease-ridden places, and make them live in terror and hunger as they continually built and rebuilt a commodity-generating empire,” he writes, “this vanished in the story of a slavery that was supposedly focused primarily not on producing profit but on maintaining its status as a quasi-feudal elite, or producing modern ideas about race in order to maintain white unity and elite power.”

Third, the gin-powered cotton economy relied on huge financial investments to open up new cotton land ever farther south and west. A series of financial bubbles ran in those directions, with literal securities issued to slaveowners and backed by the bodies of enslaved people.

“African American bodies and childbearing potential collateralized massive amounts of credit, the use of which made slaveowners the wealthiest people in the country,” write Ned and Constance Sublette in The American Slave Coast: A History of the Slave-Breeding Industry. “When the Southern states seceded to form the Confederacy they partitioned off, and declared independence for, their economic system in which people were money.”

To make their loan payments, these speculator-slavers created the brutal “whipping machine,” which drove massive productivity gains at the expense of the health and well-being of the already oppressed people working in the fields.

“The returns from cotton monopoly powered the modernization of the rest of the American economy, and by the time of the Civil War, the United States had become the second nation to undergo large-scale industrialization,” Baptist writes. “In fact, slavery’s expansion shaped every crucial aspect of the economy and politics of the new nation—not only increasing its power and size, but also, eventually, dividing U.S. politics, differentiating regional identities and interests, and helping to make civil war possible. The idea that the commodification and suffering and forced labor of African Americans is what made the United States powerful and rich is not an idea that people necessarily are happy to hear. Yet it is the truth.”

It was this marriage of a new ideological underpinning, the incredible profits the gin-powered cotton industry could produce, and the new modes of capitalization and management that American slaveowners developed that made American slavery different from, and worse than, the systems that preceded it.

The drive to keep opening up cotton land to feed the slaver-speculator economy also led to genocidal atrocities against Native Americans, as well as the imperial project of snatching the western part of the continent from Mexico, which had abolished slavery in the 1820s.

In April 1861, with the slaveholders’ rebellion beginning, The Atlantic published an essay by Charles Francis Adams Jr., the grandson of John Quincy Adams, called “The Reign of King Cotton.”

“Throughout the South, whether justly or not, it is considered as well settled that cotton can be profitably raised only by a forced system of labor,” Adams wrote. “With this theory, the Southern States are under a direct inducement, in the nature of a bribe, to the amount of the annual profit on their cotton-crop, to see as many perfections and as few imperfections as possible in the system of African slavery.”

But the bribe didn’t stop getting paid at the Mason-Dixon line. Even New England, hotbed of abolitionism and birthplace of this magazine, got rich on textiles spun in the factories along the Merrimack. Where do you think they got the cotton for the City of Spindles? Baptist tells the story of the Collins Axe Works, which sold hundreds of thousands of axes into the western parts of the South, where they were given to enslaved black people to clear the forests. Hundreds of millions of trees fell through black labor performed with these axes. And back on the Farmington River, a white factory owner and his associates got rich.

“All told, more than $600 million, or almost half of the economic activity in the United States in 1836, derived directly or indirectly from cotton produced by the million-odd slaves—6 percent of the total U.S. population—who in that year toiled in labor camps on slavery’s frontier,” Baptist calculates.

There is no escaping the basic facts of our history. Plato, Muhammad, and the Aztec empire did not have the cotton gin or the luxuries that came from the securitization of enslaved people. Native American slaveholders didn’t shape and take advantage of emergent American capitalism to subdue a continent.

Given all this, no wonder the neo-Confederates keep fighting to preserve their heroic monuments. Understanding the breadth and depth of American slavery’s evil would undermine not just their dedication to busts of Robert E. Lee, but the whole moral project of seeing whiteness as a sign of virtue.

This is what Confederate flag wavers mean when they say they are “fighting for their heritage.” They are fighting for the right to declare their ancestors good, despite the evidence of the horrors they perpetrated, which rival anything that happened in the 20th century.

And what they’re counting on is that Americans, no matter when their families arrived across seas or rivers, will excuse the Confederate flag-wavers because they want to believe only the best stories about our country, too.

There is no excuse. That other people at other times owned slaves—Greek, African, or Native American—does not excuse the system of oppression that we erected on this continent to build this country.

“Many of those people were there to protest the taking down of the statue of Robert E. Lee. So this week, it’s Robert E. Lee, I noticed that Stonewall Jackson’s coming down,” President Trump said yesterday at a press conference. “I wonder, is it George Washington next week? And is it Thomas Jefferson the week after? You know, you really do have to ask yourself, where does it stop?”

What if the answer is that it doesn’t? The evil of slavery and the white supremacy it embedded in the fabric of the country go all the way back to the beginning. And our history needs to honestly tell the story of James Madison dying without freeing a single one of the 100 enslaved people who worked for him, right alongside his call, quoted in The Atlantic in 1861, to leave the word slavery out of the Constitution so that it would “be the great charter of Human Liberty to the unborn millions who shall enjoy its protection, and who should never see that such an institution as slavery was ever known in our midst.”

We can excise the words, but we can never scrub the blood from the soil.

Can the U.S. Government Seize an Anti-Trump Website's Visitor Logs?
August 16th, 2017, 01:24 PM

Suppose you were to click on this link: This one, right here.

It will take you to the website of Disrupt J20, which organized some of the “direct action” protests on the day of President Donald Trump’s inauguration in Washington, D.C. The site contains general information about civil disobedience and political protests, and it advertises several Washington-specific events.

Some of the protests on Inauguration Day turned violent, and the U.S. government has since charged more than 200 people with felony rioting or destruction of property in connection with the events of January 20. It alleges that some of the suspects were connected to the Disrupt J20 effort.

Yet if you clicked that link above—even if you were nowhere near Washington on Inauguration Day—the government is now allegedly interested in you.

The U.S. Department of Justice is attempting to seize the visitor logs and IP addresses of anyone who has visited DisruptJ20.org, as well as any email addresses, user logs, and photos collected by the website, according to DreamHost, a Los Angeles–based web host and domain-name registrar.

This data encompasses more than 1.3 million IP addresses, as well as the email addresses and photos of thousands of people, the company said. DreamHost is not politically connected to DisruptJ20, but it provided paid web-hosting services for the group.

DreamHost has so far refused to comply with the government’s search warrant, arguing that it constitutes “investigatory overreach and a clear abuse of government authority.”

“That information could be used to identify any individuals who used this site to exercise and express political speech protected under the Constitution’s First Amendment. That should be enough to set alarm bells off in anyone’s mind,” said a blog post published to the company’s website on Monday.

A spokesperson for the U.S. Attorney’s Office for the District of Columbia did not respond to a request for comment before publication. A spokesman for the U.S. Department of Justice declined to comment.

Digital-privacy and civil-rights advocates were quick to criticize the scope of the government’s warrant. But experts in computer crime law said it wasn’t immediately obvious that the warrant was illegal.

“The Department of Justice isn’t just seeking communications by the defendants in its case. It’s seeking the records of every single contact with the site—the IP address and other details of every American opposed enough to Trump to visit the site and explore political activism,” wrote Ken White, a criminal-defense lawyer and former assistant U.S. attorney.

He continued:

The government has made no effort whatsoever to limit the warrant to actual evidence of any particular crime. If you visited the site, if you left a message, they want to know who and where you are—whether or not you did anything but watch TV on inauguration day. This is chilling, particularly when it comes from an administration that has expressed so much overt hostility to protesters, so relentlessly conflated all protesters with those who break the law, and so deliberately framed America as being at war with the administration’s domestic enemies.

“No plausible explanation exists for a search warrant of this breadth, other than to cast a digital dragnet as broadly as possible,” said Mark Rumold, a senior staff attorney at the Electronic Frontier Foundation, in a blog post. The EFF is assisting DreamHost in its opposition to the warrant.

“The Fourth Amendment was designed to prohibit fishing expeditions like this. Those concerns are especially relevant here, where [the Department of Justice] is investigating a website that served as a hub for the planning and exercise of First Amendment–protected activities,” he said.

In an email, Rumold added that the government had successfully seized visitor logs for other websites in the past. “But I’ve never seen anything on this scale, where we’re talking about millions of users and there’s no attempt whatsoever to narrow the scope (either by date, time, or user),” he told me.

“I don’t think there are precedents one way or another on this,” Orin Kerr, a law professor at George Washington University, told me.

“It’s not obvious to me whether the warrant is problematic,” he elaborated in an article at The Washington Post. The government’s search warrant instructs DreamHost to turn over all its records about DisruptJ20.org. As Kerr understands it, DreamHost wants the government to be limited to requesting only certain records about the website. He continues:

There’s an interesting and unresolved issue presented here: What’s the correct level of particularity for a website? Courts have allowed the government to get a suspect’s entire email account, which the government can then search through for evidence. But is the collective set of records concerning a website itself so extensive that it goes beyond what the Fourth Amendment allows? In the physical world, the government can search only one apartment in an apartment building with a single warrant; it can’t search the entire apartment building. Are the collective records of a website more like an apartment building or a single apartment? I don’t know of any caselaw on this.

A hearing in D.C. Superior Court is scheduled for Friday.

President Trump has addressed the January 20 protests directly at least twice. Two days after they occurred, he belittled them, along with the Women’s March held on January 21, in a tweet: “Watched protests yesterday but was under the impression that we just had an election!” he said. “Why didn’t these people vote? Celebs hurt cause badly.”

Two hours later, he tweeted an addendum: “Peaceful protests are a hallmark of our democracy. Even if I don’t always agree, I recognize the rights of people to express their views.”

Nuclear Anxiety Returns to America
August 16th, 2017, 01:24 PM

Opening their paper on Friday morning, readers of The Wall Street Journal encountered a financial item of unusually wide interest. “Here’s a question that’s probably not on the CFA exam,” write Mike Bird and Riva Gold. “What happens to financial markets if two nuclear-armed nations go to war?”

What, indeed? We soon learn the consequences could be dire. Short-term interest rates would rise and long-term rates would fall. In a small skirmish between North Korea and the United States, the S&P 500 Index might post 20-percent losses “before it became clear that the United States would prevail.” But were another nuclear-armed power like Russia or China to get involved, the European Central Bank would have to take extreme action and issue “highly dovish forward guidance.”

Yet even amid this market turmoil, the savvy broker might still protect their investment. Sure, it’s true that the Japanese yen—a traditional safe haven—makes for a tricky bet when Tokyo is 800 miles downwind of Pyongyang. But there’s at least one good option left, according to analysts at the Nordea Group:

German bunds, the perennial refuge of panicked investors, would be good to own during a nuclear conflict too, with aggressive buying pushing the spread between German two- and 10-year bunds to 0.5 percentage point, from above one percentage point now.

At last, a good spread between German bonds. What a relief.

Nowhere does the story mention several other consequences of nuclear war: the urban firestorms; the plumes of sun-blotting black smoke; the crop die-offs across Asia, Africa, and North America; and the breakdown in the global communication network, whose destruction would render the German bund meaningless (no matter how favorable its yield curve). Nor did the story pause to note the millions of dead.

In the second week of August 2017, the American public began to do something that felt distinctly 20th-century: consider the consequences of a nuclear war. Two things became clear. First, nuclear anxiety had arrived again as a mass cultural force in American life—or, at least, in the accelerated internet-era version of it. Second, the public (and the American president) was obviously out of practice in thinking about it.

The episode began in earnest on Wednesday, when The Washington Post reported that at least one intelligence agency believed that North Korea could now miniaturize its nuclear weapons to fit into an intercontinental ballistic missile. If true, it represents an alarming technological breakthrough for North Korea.

Then the president spoke. At an unrelated event at his private golf course in New Jersey, President Donald Trump warned of “fire and fury like the world has never seen” if North Korea continued to make threats against the United States. The next day—after aides tried to signal that his comments were improvised—he repeated them, saying maybe “fire and fury” was not “tough enough.”

Finally, on Friday morning, Trump tweeted that the U.S. military was “locked and loaded should North Korea act unwisely.”

Nuclear war—suddenly, everyone was talking about it, because the president was talking about it, in ways he isn’t supposed to.

Every late-night host riffed on the apocalypse. “Even Trump is scared by what he’s saying—look at him, he’s literally hugging himself,” quipped Seth Meyers, host of Late Night. (Trump gripped his torso as he uttered “fire and fury.”) A set of Democratic-connected advocacy groups, most of them not particularly radical, held an “emergency rally against nuclear war” at the White House.

And every professor or researcher of nuclear-weapons policy—normally confined to the dusty corners of university libraries and international security conferences—found themselves on a treadmill of radio and TV interviews. “[Nuclear weapons] are this kind of layer over the world, this abstract, intangible thing. We don’t talk or think about them,” says Lovely Umayam, who researches nuclear weapons at the Stimson Center, a think tank in Washington, D.C.

She said she felt glad there was renewed interest in the one technology that hangs over all U.S. international relations. But she also worried at how reactive the attention seemed. For the past week, she told me, she’s heard one constant question during TV and radio interviews: “Should we be concerned?”

“As an expert, I say, no, not quite,” she said. “We could really walk back on these words and develop de-escalation mechanisms. It’s horrible [Trump and Kim Jong Un] are talking this way, but it’s not the end of the world yet.”

“But then, as an anthropologist, I want to say: Yes, you should be concerned! You should always be concerned. And that you have to ask an expert that question—what does it say about your literacy of [nuclear] issues?” she said.

Kristyn Karl, a professor of political science at Stevens Institute of Technology, agreed that the public’s interest in nuclear weapons was way up—even if their understanding wasn’t. “The public is currently more aware of nuclear threats than they have been since the end of the Cold War,” she told me in an email.

That doesn’t mean they know much about them.

Americans flunk questions about basic nuclear security, Karl said, “such as identifying nuclear states, the scale of nuclear arsenals, etc.” Younger Americans also have little experience with nuclear weapons, especially compared with Baby Boomers.

Alex Wellerstein, a historian of nuclear weapons, also at the Stevens Institute, agreed that people seem more interested now. But he worries that they won’t stay that way once this crisis passes.

“It’s clear there is a sharp uptick of interest on nuclear questions,” he said in an email. “The question is, what kind of interest is it? Is it the kind of interest that will lead to a more sustained public interest on these topics? Or is it an ephemeral fear of the sort that comes and goes in a crisis?”

“American nuclear anxiety seems almost totally focused on foreign policy issues from small states—specifically Iran and North Korea. In that sense it is somewhat different than the period of the Cold War when the threat was much larger,” he said:

What I fear is that Americans will erroneously think that a war with either Iran or North Korea would be “no big deal” whereas we are (and were) much more aware that a war with Russia was totally unthinkable. War with Iran should be considered unthinkable (one need only look at what our war with Iraq has cost us, what monsters it created), and war with North Korea would come at a dearer cost than I think most people appreciate.

But when it comes to the prospect of nuclear annihilation, what is unthinkable and what isn’t? Americans are finding themselves back in the uneasy practice of imagining not the end of the world, but all the intermediate steps between now and then—the first warnings on the news, the orange streaks in the sky, the agony of waiting for ignition.

Writing three decades ago, the essayist and physician Lewis Thomas imagined a war with Russia and fell into despair. “My mind swarms with images of a world in which the thermonuclear bombs have begun to explode, in New York and San Francisco, in Moscow and Leningrad, in Paris, in Paris, in Paris. In Oxford and Cambridge, in Edinburgh,” he wrote:

This is a bad enough thing for the people in my generation. We can put up with it, I suppose, since we must.

What I cannot imagine, what I cannot put up with ... is what it would be like to be young. How do the young stand it? How can they keep their sanity? If I were very young, 16 or 17 years old, I think I would begin, perhaps very slowly and imperceptibly, to go crazy.

For today’s young people, looking to an uncertain future, at least there are German bonds to buy.

The 'Socially Liberal, Fiscally Conservative' Internet
August 15th, 2017, 01:24 PM

Both as a service and a company, Google has always been a convenient stand-in for the greater internet. “Googling” became shorthand for searching for something on the web. The “don’t be evil” line from its IPO prospectus became a catch-all politics for Silicon Valley.

So it’s not surprising that the outbreak of a new strain of reactionary politics has found its way to Google’s doorstep. First, Google fired the engineer James Damore for writing a memo explicitly opposing the company’s diversity and inclusion policies through a specious reading of the biology literature on gender and IQ differences in humans. Damore became an instant celebrity on the right and among free-speech absolutists.

Then, so-called “New Right” figure and noted conspiracy theorist Jack Posobiec—whom President Trump retweeted this week—called for a march on Google offices in cities across the country for this Saturday, August 19. “Google is a monopoly, and its [sic] abusing its power to silence dissent and manipulate election results,” the announcement reads. “Their company YouTube is censoring and silencing dissenting voices by creating ‘ghettos’ for videos questioning the dominant narrative. We will thus be Marching on Google!”

This week, too, the Daily Stormer, a long-running neo-Nazi site, was cut off by its original domain registrar and moved its registration to Google, which also cut off the site.

Looking at this news in isolation, you might say: Google has become a right-wing target because it is a liberal institution. Certainly, it has not gone unnoticed on the right that Googlers overwhelmingly supported Hillary Clinton, both financially and operationally. And more generally, the Bay Area technology industry has become a key base of support for Democratic candidates.

But the ideology underpinning Silicon Valley does not fall on a strict left/right spectrum. For many years, scholars who study the internet’s dynamics have been saying that the public sphere—the place where civic society is supposed to play out, the place “free speech” advocates desire to see preserved—has been privatized by Facebook, Google, and the other big internet companies. Most on the left saw this privatization as part of a larger conservative (or—GASP—neoliberal) movement that was sapping the strength of the government-secured commons.

Zeynep Tufekci, a sociologist at the University of North Carolina, made this connection in 2010. “This is about the fact that increasing portions of our sociality are now conducted in privately owned spaces. The implications of this are still playing out,” she wrote.

She cited a litany of examples running long before Google and Facebook: the outsourcing of key government functions to private contractors, the “dominance of corporate-owned media over the civic public sphere,” and the replacement of public parks with malls and “privately owned town squares.”

By moving our speech online, we entered a mall, where Facebook or Twitter or Google controls the rules, not the U.S. government. Companies have different imperatives, and for these companies the first, above all, is to make money on advertising. By and large, this has led to a maximally permissive informational environment. On the occasions they have intervened to censor nipples or ban particular kinds of sites, their response has been: Hey, it’s a free market, and you can always post photos to your LiveJournal, search with Bing, and chat on Gab.

These attitudes are why people sometimes describe Silicon Valley as “libertarian.” But most Silicon Valley people are wealthy Democrats who support progressive social causes yet do not want entrepreneurship and business restrained by the government (too much). They are the classic “socially liberal, fiscally conservative” people whom the left (dirtbag and otherwise) loves to pillory.

Take the Los Angeles–based web host and domain-name registrar DreamHost, for example. On the one hand, they are fighting the Department of Justice, which has requested details on 1.3 million visitors to an anti-Trump website. “That information could be used to identify any individuals who used this site to exercise and express political speech protected under the Constitution’s First Amendment,” the company wrote on its blog. On the other, they are part of the cloud infrastructure of neo-Nazi groups like the American Nazi Party.

All of which puts both left-wing critics and right-wing marchers into different kinds of binds.

For the right, Google and Facebook are private actors who obey the market. Last I checked, strong-arming companies into doing the will of the people by extra-market means is not high on the list of conservative principles. Why can’t the market just decide? If people don’t like Google’s or Facebook’s decisions, they can head for the digital exits.

And on the left, Google and Facebook and the rest are, in fact, manifestations of neoliberalism. But because they’ve privatized the public sphere of discourse, they now possess the power to shut down white-supremacist sites, stop Daily Stormer links from circulating, and do it all without anyone having a legal or constitutional basis for stopping the “censorship.” The biggest de-platforming of all would come from getting the internet platforms themselves to use their power to stop the speech of white supremacists, neo-Nazis, birthers, “ironic racist” 4channers, and anti-Semites. And, indeed, a message projected onto Twitter headquarters this week called for just such a move.

As more and more of daily life online is consumed by the political storm in America, the “socially liberal, fiscally conservative” position that the tech companies have staked out is getting harder and harder to hold.

Small Towns Prepare to Cash In for the Solar Eclipse
August 15th, 2017, 01:24 PM

On August 21, the moon's orbit will bring it directly between the Earth and the sun, creating a total solar eclipse in the United States for the first time since 1979. One of the first towns perfectly positioned for the most dramatic view is Keizer, Oregon. Resident Matt Rasmussen is one of many people living along the eclipse’s “path of totality” looking to make the most of this once-in-a-lifetime opportunity—and not just to take in the sights.

Rasmussen said a friend living in nearby Portland, which will only see a relatively mundane partial eclipse, casually suggested he try to rent out his house for the weekend of the eclipse. “She said we should post on Airbnb because she bet we could get a mortgage payment out of it,” he says. “I laughed, and randomly set up our house at what we thought was a large amount, never expecting to have a taker. We were booked within two days.”

Rasmussen charged $2,000 for a single night, as much as 10 times the typical price, which he guesses would be around $200 or $300. Other Oregonians contacted through Airbnb’s messaging system for this article tell similar stories.

The eclipse has been a hot topic along its path for months, with preparations taking many forms. California is prepping its solar-heavy power grid to deal with the temporary drop in sunlight. School districts situated on the eclipse path in Illinois and Missouri are canceling class on August 21. Kentucky officials are stocking up on a drug to treat heroin overdoses, and NASA is readying a raft of science experiments. In Oregon, as with other states on the eclipse’s cross-country path, government officials and locals are preparing for the swarm of visitors to these otherwise quiet, out-of-the-way towns. Residents have been told to prepare for power outages, cell-tower failures, internet-service outages, and other headaches. Gridlock figures to be a problem on roads across Oregon, straining emergency services.

“We’re planning for a very busy few days,” Peter Murphy, a spokesperson for the Oregon Department of Transportation, said in an email. He said 23 two-person crews will be stationed every few miles along U.S. Route 97, ready to remove any vehicles causing delays. “A lot will depend upon weather that day. Clouds on the coast or Willamette Valley may send travelers our way, and that will be a challenge to manage. Our message is ‘Arrive early, stay put, and leave late.’”

Rasmussen's father-in-law works on the coast, where officials are also anticipating traffic will slow to a halt. “From what I hear about the mid-Oregon coast, 50 miles west of us, they are going to station ambulances with supplies every few blocks along U.S. 101,” Rasmussen says, because authorities anticipate the roads, campgrounds, and beaches will be so jammed that ambulances may have a hard time getting through.

With a forecast like that, it’s no wonder some people are heading out of town—and hoping to profit in the process.

Zachary Burns of Redmond, a town located about 100 miles farther inland than Keizer, recalls that his telescope-hobbyist father first told him about the eclipse roughly a year ago. He has heard varying estimates for how many visitors are coming to his part of central Oregon, but even the most conservative guesses are in the hundreds of thousands.

“I originally had the thought to leave town and camp to avoid the crowds,” he says. “Then I realized every camping spot was full. Then I heard every hotel was full. This was months ago, almost six months prior to the eclipse. That’s when I realized it might be a good way to bring in a little extra income.”

Burns rented his home for the night before the eclipse for $1,200. He figures it would likely go for $300 a night on any other weekend. Four times the typical rate isn’t bad, but he says he may have actually underbid himself. “As the date draws closer and prices rise, I’ve realized I could have listed the house for possibly $2,000 a night,” he says.

Others are opening up their property as a campground for multiple visitors. Bethany Stelzer from the town of Kimberly, another 100 miles inland from Redmond, said her family is offering campsites on the grounds of her orchard for $1,500 a night—each.

“I’ve heard from several locals that they are renting their homes or fields as campsites,” she says. “It was my understanding that the Oregon state parks sold out all of their campsites in the path of the eclipse fairly quickly, so the community stepped up to offer more options.”

For his part, Burns says he ultimately decided against camping during the eclipse, instead opting to stay with his parents. He made the decision in part because anywhere he could hope to camp will be teeming with people. Then there are safety considerations; he worries that a natural disaster, like a late-summer wildfire, could make traffic jams even more dangerous.

Extra money from a night’s rental or a guided tour isn’t life-changing, but any extra bit helps given central Oregon’s relatively high cost of living, residents say. The money is also a nice tradeoff for the disruption the eclipse—or, more accurately, all the people headed to see it—will bring.

Some residents of central Oregon are more optimistic about the small-town welcome visitors will receive. Shawn Stanfill lives in Madras, about 25 miles north of Redmond. The father of Madras police chief Tanner Stanfill, he says law enforcement have taken the necessary steps to be ready to help in any situation, and he insists residents of small towns are always willing to lend a hand. The elder Stanfill is listing a couple properties for $1,500 a night, though he cautions those looking to view the eclipse from a relatively remote location like Madras need to be prepared to spend an extra two days getting in and another two getting out, so extreme will be the glut of visitors.

Zachary Burns says his employer, a restaurant supply company, has been ordering supplies weeks in advance to ensure eateries have what they need—though no one really knows what to expect. “I’ve heard varying estimates of the number of people coming to the area, but even lower estimates seem to think in the hundreds of thousands,” he says. “Should be an adventure.”

While the eclipse has become a serious moneymaking opportunity for many Oregonians, as well as those in states eastward along the eclipse’s path, those contacted for this article say they still intend to make the most of the eclipse itself. One resident said he plans to hike up a mountain trail known only to locals, where he’ll be unlikely to run into any out-of-town sky-gazers. Considering all the commotion and chaos that the eclipse could bring, they’ve probably earned at least that small measure of solitude.

Why an Anti-Fascist Short Film Is Going Viral
August 14th, 2017, 01:24 PM

How should Americans fight against a resurgent white-nationalist movement in the United States? This weekend, they returned to an artifact from an earlier era of anti-Nazism. Tens of thousands of people rediscovered—and promptly shared and retweeted—a clip from Don’t Be a Sucker, a short propaganda film made by the U.S. War Department in 1943.

When it first debuted, Don’t Be a Sucker would have played in movie theaters. Now it has made its 21st-century premiere thanks to a network of smaller screens and the Internet Archive, where it is available in full. Almost 75 years after it was first shown, Don’t Be a Sucker lives again as a public object in a new and strange context.

Its opening clip is a direct and plain-language parable in anti-fascism. It begins as a flushed man brandishes a pamphlet and addresses a crowd: “I see negroes holding jobs that belong to me and you. Now I ask you, if we allow this thing to go on, what’s going to happen to us real Americans?” He proceeds to blame blacks, Catholics, Freemasons, and immigrants for the nation’s ills.

“I’ve heard this kind of talk before, but I never expected to hear it in America,” says an older man with an Eastern European accent.

He introduces himself to a younger man next to him: “I was born in Hungary but now I am an American citizen. And I have seen what this kind of talk can do—I saw it in Berlin. I was a professor at the university. I heard the same words we have heard today.”

“But I was a fool then,” he continues. “I thought Nazis were crazy people, stupid fanatics. Unfortunately it was not so. They knew they were not strong enough to conquer a unified country, so they split Germany into small groups. They used prejudice as a practical weapon to cripple the nation.”

There ends the viral clip. But the original, 17-minute film Don’t Be a Sucker—which can be viewed in full below—continues, slipping into a short history of the rise of the Nazi Party in Germany. We see the movement evolve from an angry group of men in the streets to a party organization armed with an official state paramilitary. There’s a montage of Nazi crimes: A Jewish shop owner is carried away by police officers, a group of union members are attacked, and a college professor is arrested after telling his students that there is no scientific basis for the existence of a “master race.” (The version below is from the film’s 1947 rerelease.)*

Michael Oman-Reagan, an anthropologist and researcher in British Columbia, was the first to post the clip on Saturday evening, in a tweet comparing the orator’s rhetoric to President Donald Trump’s. His post has since been retweeted more than 85,000 times.

But he was not alone in linking the events in Charlottesville to the Second World War. Orrin Hatch, a Republican senator from Utah and the president pro tempore of the Senate, said in a tweet on Saturday: “We should call evil by its name. My brother didn’t give his life fighting Hitler for Nazi ideas to go unchallenged here at home.”

What makes the film so remarkable? It’s not as if Don’t Be a Sucker encapsulates some lost golden age of American anti-racism. Indeed the contradictions of the 1940s are inseparable from the film. In its opening montage, it shows a multiethnic group of kids—white, black, and East Asian—playing baseball. Yet in 1943, the same year it was released, the U.S. federal government kept more than 100,000 Americans imprisoned solely for the crime of being Japanese. And it was on its way to implementing one of the great anti-black wealth transfers of American history.

Still, Don’t Be a Sucker seems wise. It seems to know how democratic solidarity falters, how prejudice and factionalism can fracture a nation, and how all these forces might manifest in the United States of America. This wisdom may have emerged from simple practicality: Though the U.S. Army and Navy remained segregated for another five years, they were already vast and diverse enterprises by 1943. Simply put, different people had to work together to win the Second World War. The same was true of the whole country.

And in that, Don’t Be a Sucker may point to a deeper driver of the American experiment in multi-ethnic democracy. Building a diverse commonwealth has never been just an idealistic aspiration or moral avocation. It has been a requirement of the republic’s survival—the sole remedy to the cancer of white supremacy.


* This article has been updated to clarify that the clip is from the 1947 rerelease of the film.

A Question for Google's CEO
August 11th, 2017, 01:24 PM

When CEO Sundar Pichai addressed a controversial memo about diversity that circulated inside Google, culminating in the termination of its author, James Damore, he began by telling the company’s 72,053 employees that “we strongly support the right of Googlers to express themselves, and much of what was in that memo is fair to debate, regardless of whether a vast majority of Googlers disagree with it.”

“However,” he added, “portions of the memo violate our Code of Conduct and cross the line by advancing harmful gender stereotypes in our workplace. Our job is to build great products for users that make a difference in their lives. To suggest a group of our colleagues have traits that make them less biologically suited to that work is offensive and not okay. It is contrary to our basic values and our Code of Conduct.”

I have a question for the CEO.

Given that the full text of the memo is public, that it is the subject of a national debate on an important subject, that many educated people disagree with one another about what claims it made, and that clarity can only help Google employees adhere to the company’s rules going forward, would you be willing to highlight the memo using green to indicate the “much” that you identified as “fair to debate” and red to flag the “portions” that you deemed Code-of-Conduct violations?

Absent that, it seems to me that Google employees will remain as uncertain as ever about what they can and cannot say at the company. As an illustration, consider Alan Jacobs, an English professor at Baylor University who declares himself confused about your meaning:

Google’s position could be:

  • All studies suggesting that men-taken-as-a-group and women-taken-as-a-group have measurably different interests or abilities are so evidently wrong that any attempt to invoke them can only be indicative of malice, bad faith, gross insensitivity, or other moral flaws so severe that the person invoking them must be fired.
  • At least some of those studies are sound, but the suggestion that such differences could even partly account for gender imbalance in tech companies like Google is so evidently wrong that any attempt to invoke them can only be etc. etc.
  • At least some of those studies are sound, and very well may help to account for gender imbalance in tech companies like Google, but saying so inflicts so much emotional harm on some employees, and creates so much internal dissension, that any attempt to invoke them can only be etc. etc.
  • We take no position on any of those studies, but fired James Damore because of other things he said.

I think those are the chief options.

Actually, I can think of still more options—especially if only tiny “portions” of the memo crossed Google’s line—which only underscores the dearth of clarity available to your employees. As a general matter, for example, I wonder if you believe the truth of a proposition is relevant to whether it violates the Code of Conduct. Can something be both the scientific consensus on a subject and unmentionable?

Jacobs adds, “I seriously doubt that Google will get much more specific. Their goal will be to create a climate of maximal fear-of-offending, and that is best done by never allowing employees to know where the uncrossable lines are. That is, after all, corporate SOP.” I’d guess legal incentives are a more powerful motivator of strategic vagueness. Are we being too cynical? Over the course of its history Google has often struck me as a unique company. And surely elevating clarity here would fulfill the mission of making all pertinent information universally accessible and useful.

The Decline of the American Laundromat
August 10th, 2017, 01:24 PM

Lavanderia, one of San Francisco’s largest laundromats, is an urban relic. Its peeling aquamarine walls house some 110 machines. Telenovelas play on a TV and arcade games from the 1990s are tucked into unexpected nooks. After opening in 1991, Lavanderia—like so many other laundromats in big cities—became a social hub in a neighborhood where renters lacked the space or funds for their own machines.

But, again like so many other laundromats in big cities, Lavanderia’s future is uncertain. While families have been hauling their dirty towels, sheets, and underwear there for decades, the business’s future earnings now pale in comparison to the value of the land it sits on—rents have skyrocketed in recent years in the Mission District, the historically Latino neighborhood where Lavanderia, whose name means laundromat in Spanish, is located.

In its heyday, the 5,200-square-foot laundromat brought in over $1,000 a day in quarters. But in the past decade, its owner, a wealthy tech entrepreneur named Robert Tillman, has seen revenues dry up. Business was so bad at his nine other Bay Area laundromats that he sold them off over the years. Lavanderia is the only one Tillman has left, and he’d like to turn it into a 75-unit apartment building, with some units generating as much as $55,000 each year.

The erosion of Tillman’s laundromat business is a side effect of a national trend: Developers are remaking urban neighborhoods across the country, constructing apartment buildings for waves of young, wealthy workers and installing washers and dryers in each unit, leaving local laundromats without clientele. “Offering a washer and dryer in-unit is a trend we’re certainly seeing,” says Paula Munger, the director of industry research for the National Apartment Association. A recent survey by the industry group found the addition of washers and dryers to be one of the most common upgrades to apartments in recent years.

That has posed a problem for laundromats. According to data from the Census Bureau, the number of laundry facilities in the U.S. has declined by almost 20 percent since 2005, with especially precipitous drops in metropolitan areas such as Los Angeles (17 percent) and Chicago (23 percent). (While that data includes both laundromats and dry cleaners, laundromats account for the bulk of the drop.) In the disappearance of laundromats, a longtime staple of urban living, one can detect yet another way that cities have changed in response to an influx of higher-earning residents.

Collectively earning $5 billion each year, as estimated by the Coin Laundry Association, the U.S.’s coin-operated laundromats are overwhelmingly mom-and-pop operations and share a tightly knit history with the American city. While the first self-serve laundromat opened in 1934 in Fort Worth, Texas, the industry didn’t really take off until the ’50s, after many cities became more densely populated. “Like our mantra goes, ‘The more people, the more dirty clothes,’” Brian Wallace, the president of the Coin Laundry Association, told me. Technological leaps in washer and dryer efficiency during the ’80s allowed the industry to expand even more. Laundromats, communal spaces that brought people together to perform a mundane chore, became a fixture of the urban experience, with Hollywood using them to stage serendipitous meetings, as it did in the 1985 film The Laundromat.

Robert Tillman, the owner of Lavanderia, a laundromat in San Francisco’s Mission District (Marc Vartabedian)

By the ’90s, the industry was strong enough to attract Tillman’s attention. Tillman, who’s now 61, earned his first fortune from tech ventures—among other things, he was behind DigitalGlobe, a satellite-imaging company that has supplied orbital views of earth to Google and the U.S. government. Back on the ground, Tillman, a graduate of Stanford’s business school, recognized that people still needed clean clothes, the age of satellites notwithstanding. Soon, he came to own 18 laundromats. “The ’90s were a great time for the laundry business,” Tillman recalls.

In the years since the dot-com bubble burst, San Francisco’s rapid inflow of wealth has hurt his businesses. Lavanderia’s revenue has slid 33 percent since 2004, according to the business’s accounting records. Terry Smith, who repairs machines and collects the quarters at Lavanderia and other Bay Area laundromats, anecdotally reports that lately he’s been dumping fewer coins into his jingling collections sack as he makes his rounds. Even Tillman’s eight laundromats in Albuquerque were affected by the urban transformations seen across the country. “I saw where the business was headed,” Tillman says. By 2007, he had sold off all his laundromats except for Lavanderia—part of a 15-percent drop in the number of the Bay Area’s laundry facilities since the early 2000s, according to Census Bureau data.

Laundromats’ margins are further thinning as the prices of water and sewage services have risen across the country. Utilities make up by far the heftiest of Lavanderia’s expenses, costing over $100,000 each year. Add to that the roughly $30,000 Tillman spends fixing his aging washers and dryers, and the laundromat is left with about $140,000 of profit each year, a number that continues to dwindle.

Customers do their laundry at Lavanderia (Marc Vartabedian)

At the same time, laundromats were never a great bargain for low-income customers. Families can do multiple loads of laundry a week, and at a laundromat, that can cost $100 or more per month. There is rarely an alternative: Landlords are typically reluctant to install the right plumbing and hookups in already cramped apartments in what are often older buildings.

With this calculus in mind, Tillman would like to turn Lavanderia into a six-story apartment building that few of his current customers could likely afford to live in. He is hardly the first laundromat owner to conceive of such a plan. As Adam Lesser, the owner of Fiesta Laundromat, just a few blocks from Lavanderia, puts it, “I’m over here cleaning out lint. What the hell am I doing?”

Tillman filed a proposal with San Francisco’s planning department outlining his intentions three and a half years ago, but the project has stalled in the face of anti-development activism. Erick Arguello, a longtime Mission resident, heads one of the groups opposing Tillman’s project. He has seen one laundromat after another close in the neighborhood in the past several years: Super Lavar, where his family of seven used to go, turned into an upscale restaurant. Cleaner Wash, a small laundromat also in the Mission, was bought for over $1.5 million and turned into a high-end gym. “We have large families and you have to walk three or four blocks to go do your laundry,” Arguello says. “You also lose that sense of community. The laundromat was a family affair growing up.”

Meanwhile, as the project is on hold, Tillman has been putting off much-needed repairs. These days he’s trying to drum up local support at community meetings in the Mission so he can finally raze Lavanderia. Not all laundromat owners are pursuing Tillman’s route, though. In June, at the Coin Laundry Association’s conference, held every two years in Las Vegas, owners explored ways to make laundromats more appealing to a hipper clientele, such as by offering wi-fi. “The representations have been more positive and younger—a place where young people meet,” Wallace says, adding that the industry faces further challenges from on-demand laundry apps. Some laundromats have already morphed into cafes where customers can drink craft beer or sip a latte while waiting for their loads to finish. That may be a positive turn for the coin-laundry industry, but it does sound a whole lot like what’s happening to Lavanderia’s neighborhood anyway.

These Scientists Took Over a Computer by Encoding Malware in DNA
August 10th, 2017, 01:24 PM

DNA is fundamentally a way of storing information. Usually, it encodes instructions for making living things—but it can be conscripted for other purposes. Scientists have used DNA to store books, recordings, GIFs, and even an Amazon gift card. And now, for the first time, researchers from the University of Washington have managed to take over a computer by encoding a malicious program in DNA.

Strands of DNA are made from four building blocks, represented by the letters A, C, G, and T. These letters can be used to represent the 1s and 0s of computer programs. That’s what the Washington team did—they converted a piece of malware into physical DNA strands. When those strands were sequenced, the malware launched and compromised the computer that was analyzing the sequences, allowing the team to take control of it.
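
To make the letters-to-bits idea concrete, here is a minimal C sketch of one common two-bits-per-base convention (A=00, C=01, G=10, T=11). It is an illustration only, not necessarily the encoding the Washington team used, and the example strand is made up.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Map one DNA base to a two-bit value: A=00, C=01, G=10, T=11. */
    static int base_to_bits(char b) {
        switch (b) {
            case 'A': return 0;
            case 'C': return 1;
            case 'G': return 2;
            case 'T': return 3;
            default:  return -1;   /* not a valid base */
        }
    }

    int main(void) {
        const char *strand = "ACGTTGCA";   /* hypothetical synthesized strand */
        uint8_t bytes[16] = {0};
        size_t n = strlen(strand);

        /* Pack four bases into each byte, most significant bits first. */
        for (size_t i = 0; i < n; i++) {
            int bits = base_to_bits(strand[i]);
            if (bits < 0) return 1;
            bytes[i / 4] |= (uint8_t)(bits << (6 - 2 * (i % 4)));
        }
        for (size_t i = 0; i < (n + 3) / 4; i++)
            printf("%02x ", bytes[i]);       /* prints: 1b e4 */
        printf("\n");
        return 0;
    }

Read in the other direction, the same table turns an arbitrary string of bits, such as a compiled program, into a sequence of bases that a synthesis service could produce as a physical strand.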

“The present-day threat is very small, and people don’t need to lose sleep immediately,” says Tadayoshi Kohno, a computer security expert who led the team. “But we wanted to know what was possible and what the issues are down the line.” The consequences of such attacks will become more severe as sequencing becomes more commonplace. In the early 2000s, it cost around $100 million to sequence a single human genome. Now, you can do it for less than $1,000. The technology is not just cheaper, but also simpler and more portable. There are even pocket-sized sequencers that allow people to analyze DNA in space stations, classrooms, and jungle camps.

But with great ubiquity comes great vulnerability. DNA is commonly used in forensics, so if troublemakers could hack sequencing machines or software, they could change the course of an investigation by altering genetic data. Or, if machines are processing confidential data about genetically modified organisms, hackers could steal intellectual property.

There’s also the matter of personal genetic data. The United States is currently trying to sequence the DNA of at least 1 million Americans to pave the way for precision medicine, where treatments are tailored to an individual’s genes. “That data is very sensitive,” says Peter Ney, a student in Kohno’s lab. “If you can compromise [the sequencing pipeline], you could steal that data, or manipulate it to make it seem like people have genetic diseases they don’t have.”

“We want to understand and anticipate what the hot new technologies will be over the next 10 to 15 years, to stay one step ahead of the bad guys,” says Kohno. In 2008, his team showed that they could wirelessly hack their way into a heart implant, and reprogram it to either shut down or deliver debilitating jolts. In 2010, they showed that they could hack into the control system of a Chevrolet Impala and commandeer the car. Then, they turned their attention to DNA sequencing. “It’s an emerging field that other security researchers haven’t looked at, so the intrigue was there,” says Kohno. “Could we compromise a computer system with DNA biomolecules?”

They could, but reassuringly, it wasn’t easy. To make their malware work, the team introduced a vulnerability into a program that’s commonly used to analyze DNA data files. They then exploited that weakness. That’s a bit of a cheat, but the team also showed that such vulnerabilities are common in software for analyzing DNA. The people who created these programs didn’t have hacking in mind, so their products rarely follow best practices for digital security. With the right molecular malware, it could be possible for adversaries to compromise these programs and the computers that run them.

“I liked the creativity a lot, but their exploit is unrealistic,” says Yaniv Erlich, a geneticist at Columbia University and the New York Genome Center. (Earlier this year, Erlich encoded a computer virus in DNA, but he didn’t code it so that it would launch on its own when the DNA was sequenced.) In practice, the team’s malware would create a glitch that most sequencing centers would spot and fix. An adversary could only assume control of a compromised computer if they had impeccable timing, and struck immediately after the strands were sequenced.

Still, Erlich agrees that programs for analyzing DNA have “relatively relaxed security standards.” There are rumors, he says, that one big research institution was hit by ransomware, because they used the default admin passwords on their sequencing machines.

“My hope is that over the next 5 to 10 years, people take a strong interest in DNA security, and proactively harden their systems against adversarial threats,” says Kohno. “We don’t know of such threats arising yet and we hope that they’ll never manifest.”

Radio Atlantic: Ask Not What Your Robots Can Do for You
August 9th, 2017, 01:24 PM

Our increasingly smart machines aren’t just changing the workforce; they’re changing us. Already, algorithms are directing human activity in all sorts of ways, from choosing what news people see to highlighting new gigs for workers in the gig economy. What will human life look like as machine learning overtakes more aspects of our society?

Alexis Madrigal, who covers technology for The Atlantic, shares what he’s learned from his reporting on the past, present, and future of automation with our Radio Atlantic co-hosts, Jeffrey Goldberg (editor in chief), Alex Wagner (contributing editor and CBS anchor), and Matt Thompson (executive editor).

The Moral History of Air-Conditioning
August 9th, 2017, 01:24 PM

Until the 20th century, only the wealthy or dying might have witnessed someone trying to cool the air indoors—even though building a fire to keep warm in the winter would have been perfectly reasonable. Extreme heat was seen as a force that humans shouldn’t tamper with, and the idea that a machine could control the weather was deemed sinful. Even into the early 1900s, the U.S. Congress avoided the use of manufactured air in the Capitol, afraid voters would mock them for not being able to sweat like everyone else.

While adoption of air-conditioning demanded industrial ingenuity, it also required renouncing the belief that cooling the indoor air was a vice. But in the process of shedding that hypothetical moral offense against the heavens, the air conditioner has perpetrated worse, actual sins against the Earth.

* * *

Despite the shadow of immorality, breakthroughs in air-conditioning developed out of desperation. Doctors scrambling to heal the sick took particular interest. In 1851, a Florida doctor named John Gorrie received a patent for the first ice machine. According to Salvatore Basile, the author of Cool: How Air-Conditioning Changed Everything, Gorrie hadn’t initially sought to invent such an apparatus. He’d been trying to alleviate high fevers in malaria patients with cooled air. To this end, he designed an engine that could pull in air, compress it, then run it through pipes, allowing the air to cool as it expanded.

Outside of his office though, people saw no practical need for this achievement. It wasn’t until the pipes on Gorrie’s machine unexpectedly froze and began to develop ice that he found a new opportunity. Still, this accomplishment was lampooned as sacrilege in The New York Globe: “There is Dr. Gorrie, a crank ... that thinks he can make ice by his machine as good as God Almighty.”

The use of ice and snow to chill drinks or to help cool a room was nothing new. In the 17th century, the inventor Cornelius Drebbel used snow that had been stored underground during the summer to perform an act he called “turning summer into winter.” In his book Absolute Zero and the Conquest of Cold, Tom Shachtman speculates that Drebbel achieved his effect by mixing snow with water, salt, and potassium nitrate, which formed ice crystals and significantly cooled the space. King James, who invited Drebbel to demonstrate his innovation, reportedly ran from the demonstration in Westminster Abbey, shivering.

Ice would be used two centuries later to cool another man in power, U.S. President James A. Garfield. On July 2, 1881, Charles Guiteau fired two shots from his revolver into Garfield’s back. The aftermath inspired naval engineers to devise a way to keep the president cool as he slowly died that summer.

The astronomer Simon Newcomb oversaw development of the apparatus that struggled to chill Garfield’s sickroom. Newcomb rigged together an engine connected to pipes that powered a fan to blow over a giant bucket of ice. In written reports, Newcomb explained that his apparatus held “some six tons [of ice] in all, through which the air might pass in one direction and return in the other.” The device lowered the room’s temperature from 95 to 75 degrees—and ate up hundreds of pounds of ice an hour.

As news of Newcomb’s machine slowly captured the public’s interest, distrust of cooling the air began to wane. Inventors developed fanciful schemes to beat the heat. One believed he could take a balloon connected to a fire hydrant and a hose and create personal rainstorms. Another came up with the idea of towers with carbon dioxide bombs at the top that would explode above a neighborhood and cool the air upon detonation. Some of these curiosities managed to win patents, but few proved useful in practice.

* * *

Two decades after Garfield’s death, Willis Carrier coined the term “air-conditioning.” Although it wasn’t an overnight sensation, Carrier’s breakthrough came in July 1902, when he designed his Apparatus for Treating Air, first installed in the Sackett-Wilhelms Lithographing and Publishing building in Brooklyn, New York. The device blew air over tubes containing a coolant. Its purpose was to reduce humidity more than to reduce air temperature; excess water in the air warped the publishing house’s paper.

In 1899, Alfred R. Wolff had preceded Carrier with an air-cooling device, installed in the dissecting room of Cornell Medical College in New York City. Later, in the same year that Carrier installed his first apparatus in Brooklyn, Wolff placed his machine at the New York Stock Exchange. Instead of keeping cadavers fresh for study, it brought comfort to the horde of men at work.

The technology began to spread. Frigidaire sold the first “room cooler” for the home in 1929. H.H. Schultz and J.Q. Sherman marketed an air conditioner that leaned against the windowsill, but the first window-mounted unit, as we know it today, was the 1932 Thorne Room Air Conditioner. It looked like the grill of an old car shoved through a window. In her book Cool Comfort: America’s Romance with Air-Conditioning, Marsha Ackermann recounts a radio interview in which Carrier announced his vision. He imagined a world in which “the average businessman will rise, pleasantly refreshed, having slept in an air-conditioned room. He will travel in an air-conditioned train, and toil in an air-conditioned office.”

Air-conditioning’s major public debut was at the 1939 World’s Fair. Carrier hosted the Carrier Igloo of Tomorrow expo, where 65,000 visitors would experience air-conditioning for the first time, boosting consumer interest. Over the next decade, as the air conditioner shrank in size, advertisements for the machine shifted their appeals from men in the workplace to women at home. In some early ads the air conditioner sits in the window among a proud family admiring their machine like a spacecraft that had landed in the living room.

Basile points out another, less obvious move that increased the device’s popularity: In 1959, the U.S. Weather Bureau created its “discomfort index”—we know it today as the heat index, a measure of temperature and humidity combined. The discomfort index gave an unexpected boost to air-conditioning by, as Basile says in his book, putting “people in mind of cooled air.” Now the public could gauge if it was too hot to go outside. If they could afford it, there were plenty of air-conditioner manufacturers offering solace from the weather.

By the 1960s, millions of air conditioners were being sold every year in the United States. Windows across cities and suburbs were being plugged with the machines. According to the Energy Information Administration’s 2011 Residential Energy Consumption Survey, 87 percent of households in the United States had an air conditioner or central air. That’s compared to 11 percent in Brazil and only 2 percent in India.

* * *

While the public’s early reluctance to embrace air-conditioning may have hampered the technology’s initial development, its eventual popularity has proved detrimental to the Earth’s atmosphere.

In 1989, the Montreal Protocol entered into force in an effort to cut the release of chlorofluorocarbons, or CFCs, into the atmosphere. Freon, a CFC refrigerant used in early A/Cs, was among the chemicals in older air-conditioning units that contributed to ozone depletion.

Even though refrigerants have been modified to use fluorine instead of chlorine, and thereby to avoid impacting ozone, air-conditioning still exerts enormous environmental impact. According to Daniel Morrison, the acting deputy director of communications at the U.S. Department of Energy, residential and commercial buildings used more than 500 billion kilowatt-hours of electricity for air-conditioning in 2015 alone. That’s almost 20 percent of the total electricity used in buildings, amounting to $60 billion in electricity costs annually. Air-conditioning is also one of the main contributors to peak electric power demand, one symptom of which is rolling summer blackouts.

Unlike the television or the telephone, the air conditioner has never undergone a major design makeover. But there are companies trying to revolutionize the future of air-conditioning—both in aesthetics and efficiency. Some of these efforts echo earlier qualms about the unseemliness of cooling interior air by making air-conditioning more personal. CoolWare, for example, makes an A/C collar, which wraps around the neck and delivers water-cooled air via small fans. Wristify offers a similar product as a bracelet. Kuchofuku makes an air-conditioned work shirt of a similar design.

A Cyprus-based company called Evapolar has introduced what it calls “the world’s first personal air cooler.” It’s a small cube with a water reservoir and a fan that creates a breeze and purifies the air. Evapolar promotes the idea of a “microclimate” designed to cool a single person’s work or sleep space, and thereby to avoid wasting energy by cooling entire rooms or buildings. “Just as our phones became personalized, we believe that the climate device should also become personalized,” Evapolar spokesperson Ksenia Shults tells me.

Dyson and Xiaomi are also introducing small, personalized air purifiers into the market. All these devices remain niche (and fairly uncool, as it were), but stranger things have become mainstream.

Even today, air-conditioning remains controversial. Because of its environmental impact, some advocates call for giving up the machines altogether. Others accuse the air conditioner of chauvinism, forcing women in the workplace to dress one way inside and another outside. It has become a symbol of both human ingenuity and human weakness, acclimatizing bodies so that they are less resilient against natural heat without the aid of machines.

More than just an appliance, the air conditioner is a memento mori. It was a device people invented to avoid a few individual deaths, and yet one whose adoption might have a role to play in the passing of a temperate climate for everyone. As summer proceeds, listen to the chorus of machines humming in the windows, outside the houses, atop the office buildings. They offer a reminder that humanity’s ingenuity can come at a cost. Maybe our forebears weren’t entirely wrong to see peril in the act of cooling the air.


This article appears courtesy of Object Lessons.

The Sound of an Atomic Bomb
August 9th, 2017, 01:24 PM

Popular imagery of the atom bomb is oddly sterile.

For all we know of the horrors of nuclear weapons, the visual that’s most often evoked is ethereal, if ominous: a silent, billowing cloud, aloft in black and white.

The reasons for this are understandable. Nuclear weapons have been tested far more often than they’ve been used against people. And the only two times they were used in warfare—in Hiroshima, then Nagasaki, 72 years ago—photographers captured many scenes of devastation, yet video recording was scant.

Survivors of the bombings have shared what they saw and heard before the terror. John Hersey’s famous report, published in 1946 by The New Yorker, describes a “noiseless flash.” Blinding light and intense pressure, yes, but sound? “Almost no one in Hiroshima recalls hearing any noise of the bomb,” Hersey wrote at the time. There was one person, a fisherman in his sampan on the Inland Sea at the time of the bombing, who “saw the flash and heard a tremendous explosion,” Hersey said. The fisherman was some 20 miles outside of Hiroshima, but “the thunder was greater than when the B-29s hit Iwakuni, only five miles away.”

There is at least some testing footage from the era that features sound. It is jarring to hear. The boom is more like a shotgun than a thunderclap, and it’s followed by a sustained roar. Here’s one example, from a March 1953 test at Yucca Flat, the nuclear test site in the Nevada desert.

The National Archives description of the footage is matter-of-fact—which is the purpose of archival descriptions, but which seems strangely detached, considering: There’s the mountain ridge in early morning. An atom bomb is exploded. Burning. Pan of the mushroom against darkened sky. The cloud dissipates as the sky lightens. A yucca plant and Joshua trees in foreground. Hiller-Copters buzz in. And, finally, General John R. Hodge standing at a microphone, blinking into the morning sun.

“This test, I think, went very well,” he said. “I was quite interested in how the troops reacted. I didn’t find any soldier there who was afraid.”

“They took it in stride,” he added, “as American soldiers take all things.”

The JCC Bomb-Threat Suspect Had a Client
August 8th, 2017, 01:24 PM

A federal court has unsealed new documents in the case against an Israeli teenager, Michael Kadar, who has been accused of making at least 245 threatening calls to Jewish Community Centers and schools around the United States. According to the documents, Kadar advertised a “School Email Bomb Threat Service” on AlphaBay, an online marketplace for illicit goods and services that was shut down by the federal government in July. Authorities have identified an individual in California who allegedly ordered and paid for at least some of Kadar’s threats.

A newly unsealed search warrant alleges that Kadar charged $30 for an email bomb threat to a school, plus a $15 surcharge if the buyer wanted to frame someone for it. “There is no guarantee that the police will question or arrest the framed person,” Kadar allegedly wrote in his ad.

I just add the persons name to the email. In addition my experience of doing bomb threats putting someones name in the emailed threat will reduce the chance of the threat being successful. But it’s up to you if you would like me to frame someone.

Kadar charged double for a threatening email to a school district or multiple schools, but districts with more than 12 schools required a “custom listing.” He noted that he was available “almost 24/7 to make emails,” and he promised to refund non-successful threats.

Kadar got good reviews. One AlphaBay user wrote that the threats were “Amazing on time and on target. We got evacuated and got the day cut short.” Based on the date when the comment was posted, it appeared to refer to a threat made to Rancho Cotate High School in Rohnert Park, California, north of San Francisco.

The Justice Department seized AlphaBay in late July—Attorney General Jeff Sessions called it “the largest dark net marketplace in history.” The documents in the Kadar case suggest that authorities had been tracking AlphaBay for a while: The search-warrant application alludes to screenshots of Kadar’s activity on the marketplace taken in mid-March.

It’s possible that the information discovered in the Kadar case contributed to the AlphaBay investigation. The Kadar documents were unsealed on July 19, the day before the Justice Department announced that AlphaBay had been shut down. Previously, the search warrant had been sealed because it was “relevant to an ongoing investigation into the criminal organizations as not all of the targets of this investigation will be searched at this time.” The search warrant and related legal documents were unsealed because the FBI and local authorities in California may need them to pursue criminal charges against the suspected buyer or buyers, or they may eventually be producible in the discovery phase of a criminal proceeding. The filings were first publicly flagged by Seamus Hughes, the deputy director of the Program on Extremism at George Washington University.

When Kadar was arrested in late March, members of the Jewish community were shocked that an Israeli teenager appeared to be responsible for many of the bomb threats that had forced Jewish Community Centers and schools to repeatedly evacuate their buildings last winter. Authorities arrested another suspect, Juan Thompson, in connection with some of the threats, but he appears to have made only a handful of the calls, allegedly in an attempt to get revenge on an ex-girlfriend. The new documents suggest that even more people may have been involved as buyers—but how many, who they are, and why they did it are not yet clear, and the documents do not specifically state that any of the threats to Jewish institutions were issued at the behest of clients. So far, the investigation has led to a surprising pair of suspects. It’s not clear what kind of person will emerge as a suspect next.

Sage, Ink: The Damage-Control Doodle
August 8th, 2017, 01:24 PM

How Uber Is Building Uber for Trucking
August 8th, 2017, 01:24 PM

As Uber battles taxis and other ride-hailing apps in cities across the world, the company is beginning to move quickly into a much larger transportation market: trucking.

This spring, Uber unveiled Uber Freight, a brokerage service connecting shippers and truckers through a new app. Conceptually, “Uber for trucking” seems like a logical extension of the passenger transport business.

But the logistics industry has totally different dynamics. For one, it’s business to business. Most truckers are owner-operators or they’re part of very small companies with a handful of vehicles. The industry has well-established ways of doing things. Truckers basically work in the places where Uber’s ride-hailing service doesn’t. And unlike Uber’s ride-hailing service, the company can’t bring a huge new supply of drivers onto the market to change the dynamics of transportation. As it is, there are somewhere north of 3 million truck drivers in America, between long-haul and delivery.

Uber Freight was born out of the marriage of an internal Uber team with members of Otto, after Uber acquired the latter company early last year. Since then, the group has split into two teams: self-driving research and development, managed by Alden Woodrow, formerly of Google X, and Uber Freight. Freight occupies a floor of one of Uber’s offices in downtown San Francisco and has a large operations team in Chicago.

Uber has had a brutal last year. The company's culture has been critiqued from the inside and outside as sexist and fratty. The problems led to the ouster of a series of top executives, including founder Travis Kalanick. Even in trucking, Uber's acquisition of Otto has led to a lawsuit filed by Alphabet's self-driving car division, Waymo, related to the alleged theft of sensor technology. One Uber employee I know recently joked, "Uber's become a four-letter word."

I visited the company’s San Francisco office with Uber Freight’s product lead, Eric Berdinis. He’d come to Uber via Otto after a stint at Motorola working on the Moto 360 smartwatch, among other things. He graduated from the University of Pennsylvania in 2013, which makes him roughly 27 years old.

We walked the floor that is Berdinis’s domain. The engineering team is on the west side of the building, ops on the east. In the ops room, heat maps of America glowed on mounted televisions, showing where Uber is doing the most business. Texas was hot. This is certainly one of the places where software is nibbling away at the world.

Then we tucked into a conference room for an extensive interview. We talked through how to actually build “Uber for trucking,” what really hurts truckers, whether Otto oversold the speed at which self-driving trucks would arrive, and what drivers think of Travis Kalanick.

Alexis Madrigal: Let’s talk about Uber Freight and self-driving trucks. When Uber started, self-driving cars were pretty far away. When Uber Freight starts, perhaps self-driving trucks are not that far away. How much do you think self-driving trucks would change the economics for you guys?

Eric Berdinis: In my time at Otto, we did spend a decent amount of time thinking about the economics of trucking once it happens, even if it is a decade out. Now that I have been spending more time on the freight side, I haven’t been as close to that. But the teams are in communication about how these things might work together at some point.

Madrigal: And what is the relationship between the self-driving and Freight teams now?

Berdinis: They were born from a similar origin story. At least, I came from that team. The day-to-day workings are pretty separate. They are going down the path of finding their first customers and we’re scaling up the business and building the network. We’re in sync on what’s happening, but no active workstreams together like that.

Madrigal: Are you hearing from drivers that they are worried about it?

Berdinis: You see that come up every once in a while.

Madrigal: I know this isn’t what you’re doing on a day-to-day basis anymore, but how could you see the automation playing out?

Berdinis: I’ll first start by saying that one of the last things I worked on on the Otto side was the Otto-Budweiser partnership and the video and the whole thing around that. Once I joined Uber Freight full time, I was thinking to myself, “We really made it seem like this thing was coming sooner than it is. We probably scared a lot of people. We kind of hyped this thing up.”

And it is showing what the future will be like. But it won’t be coming as fast as the video made it seem. The reality is that the transition to any kind of self-driving truck future is quite a ways away.

But in terms of how we think about that future, we actually do see a future where jobs don’t get impacted in the way that people expect them to. We wouldn’t be doing Uber Freight, which is a human-driven product, if we didn’t think that there was a responsible way for the future to look with humans and self-driving trucks.

Madrigal: Can you describe the future you see where there are autonomous trucks but jobs are not negatively impacted?

Berdinis: The answers aren’t perfectly clear yet, but the way that we’re building out this product is heading toward a direction that is the most driver-friendly possible. Once we have a more defined plan for how self-driving trucks and Uber Freight could work together, the specifics will be clearer. There are lots of paths that could happen. Nothing to go into detail on now.

Madrigal: Has the recent trouble at Uber affected you all more or less than the standard employee at the company?

Berdinis: Uber Freight, because it has been incubated from the beginning with the Otto acquisition, we’ve always had really strong leadership internally. So, there has not been a huge impact from any of the searches for COO or CEO. The board is extremely excited about freight. They love having Uber with a diverse set of business opportunities. It hasn’t affected shippers. It hasn’t affected drivers. If you asked a driver, “Did you hear about Travis Kalanick?” They’d be like, “What are you talking about?”

Madrigal: But you did have a big departure from Otto in [founder Anthony] Levandowski. And there’s the Waymo lawsuit. Does that affect you guys on the Uber Freight ops side?

Berdinis: It really doesn’t. Because there are no self-driving components to Uber Freight. We definitely get questions like you’re asking me now. But it’s not like our technologies have anything to do with self-driving.

Madrigal: How did Uber Freight get started?

Berdinis: Curtis Chambers, who I think was the #7 employee at Uber, was tasked with exploring new opportunities in transportation. He was there for the start of uberX. He started UberEATS. Then, around the time that Otto was started, which was January/February of last year, Curtis was off with a few salespeople and engineers talking to trucking companies and starting to figure out if Uber should get into trucking. With the Otto acquisition, that solidified. The team we had created and the team Curtis had created—3 or 4 people on each side—we said, okay, let’s build out Uber Freight.

Madrigal: And the model you settled on is that Uber Freight essentially works as a broker between people with stuff to ship and truckers?

Berdinis: There is a defined model for how you build a company in the brokerage industry, which is the middle man between shippers and carriers. There have been a lot of brokers that have come along since the 1980s, when brokers became a formal thing.

Madrigal: Because of deregulation.

Berdinis: Right. So, there was a playbook for that. But it was completely unknown for how we do this in a tech-forward way that doesn’t totally follow the normal step-by-step that a brokerage would go after.

Madrigal: Which would just be lining up both sides of the marketplace, getting loads and getting trucks.

Berdinis: You make a promise to a shipper and hustle to find a driver and then, boom, that’s your first load. You just keep doing that at scale. It’s very easy to do it manually because you’re just calling and negotiating. You can muscle through that. But how do you get drivers to use an app or embrace a new way of doing things, especially when: 1) These drivers don’t really use technology in their day-to-day lives, and 2) when we’re really small, they log in and there are like five loads. That’s not a very useful product. So how do we get past the chicken-and-egg problem to the point where we are today when drivers come back every single day. And some of them are 100 percent on Uber Freight, like they completely transitioned their business.

Madrigal: How did you do that?

Berdinis: We didn’t actually put the apps out into the stores, the point where you can log in and book a load. That wasn’t until February for Android and March for iOS. So between September when we moved the first load until February, it was a lot of manual work, old-school hustle, get the loads, get the drivers.

Madrigal: Did you hire people from the other brokers?

Berdinis: Yeah, for sure. Uber has a very specific kind of ops executor. The Uber-style ops executor is very analytical, lot of them from finance backgrounds. They can work very hard and think through problems in a very analytical, data-driven way. And then there is the brokerage-style ops person, who is much more on the execution side. They know the industry really well. They can hear in the driver’s tone of voice if they are lying about a flat tire or just delayed from their previous shipment. All that kind of stuff. So, marrying the two kinds of operators together helped us build that ops team.

Madrigal: You guys decided to regionally build out. So the first market is ... Dallas?

Berdinis: We call it the Golden Triangle: Dallas, Houston, San Antonio. There are pretty even flows of freight in and out of each of those cities. So if we can capture that triangle, as soon as you drop off a load in Dallas, you can pick up a load to either Houston or San Antonio. There are other kinds of natural triangles around the country, but just within that triangle area, that makes up about 10 percent of the country’s freight.

Madrigal: Relative to other brokerages, you’re better capitalized and possibly better organized, and you don’t have to make money right away. There are a lot of advantages you guys have in going into a market like that. But what were the hard things about it?

Berdinis: When we were starting up, before we publicly launched, most of the drivers we had talked to had never heard of Uber. They operate between cities, and between cities, Uber doesn’t exist. So, it’s not top of mind. Once we did launch publicly, we started to see the camaraderie with their taxi friends. But you also hear other drivers coming and pushing back against them, saying, “With taxis, they created new supply. And that’s why there is competition. With trucking, Uber Freight is not creating new truck drivers.” We’re actually just giving loads in a more efficient way. We’re paying quickly. Over the last nine months, we’ve gotten pretty deep into the crazy pain points that drivers have, and we are going one by one to knock them off.

Madrigal: What are those?

Berdinis: It all comes back to earnings at the end of the day.

Madrigal: Because they are small business owners.

Berdinis: And drivers get paid from shippers, net 30– and net 60–day terms. [Meaning, the people shipping stuff have 30 or 60 days from the work being completed to pay the truckers.] If their truck breaks down, they struggle with the 60-day terms because they are working week to week. As a result, there is this huge industry called “factoring,” it’s kind of like payday loans. The trucker says, “Give me 95 percent of this receivable but today, versus making me wait 60 days.” That’s just 2–5 percent skimmed off of every single load. And when these drivers are only making a few percent profit margin, that could be all the profit they are making. The whole payment process does not work and it is causing a lot of trucking companies to go out of business.

Madrigal: What else has surprised you in making this foray into logistics?

Berdinis: I’ll start with the app itself. There was lots of apprehension at the beginning when we started calling drivers. A, they’d never heard of Uber. So the sell was hard. And B, a lot of them had never downloaded an app. They might have an iPhone, but we’d say, “Go to the App Store.” And they’d say, “What is that?” It was 45 minutes per driver walking them through the download process and password. We started to think that if we had to do this for tens of thousands of drivers, we could never scale.

But as time went on, drivers started coming to us instead of us going to them, and it started self-selecting for drivers who know how to use apps and get it. The usage of the app was far exceeding our expectations.

We are seeing that not only are the drivers who booked loads with us booking more loads every single week they come back, but drivers who never booked a load with us continue opening up the app almost every single day to check for new opportunities. We saw this crazy engagement. There’s not a lot of ways for drivers to see what loads are available out there. And just having the list and the price—that visibility in and of itself—is a huge mental shift.

Madrigal: It’s almost like the early stories around cell phones and farmers in whatever country being able to check the prices at market.

Berdinis: This is like that for a lot of these truck drivers. We were super skeptical that the drivers would know how to use the app. But whether it is self-selection or whatever, we found this incredible affinity to come back and check more. We were pleasantly surprised by that.

Madrigal: Where does all this go from here? What are the next steps?

Berdinis: Texas was our original focus and yesterday, we announced six new states or regions we’ll do our same kind of density play in. That’s gonna help us understand if we can replicate the success we’ve seen in Texas in these other markets.

Madrigal: Do you have a GM for those markets, the way Uber’s passenger business would?

Berdinis: We don’t have a GM for those markets. It is all centralized from the ops team. When Uber launches new cities, they have a GM. They have a pretty standard process. For Freight, it’s not exactly like that. Because freight moves between cities. And the lines are not as clean as “Here’s Los Angeles that’s launched.”

Madrigal: Okay, that’s one shift. What’s the other big thing?

Berdinis: Up until now, the way drivers interacted with the app, they have to go into the app and search for what they want. We’ve learned that drivers have pretty specific preferences. Uber drivers don’t really have preferences. Maybe they can use this tool that helps them drive home at the end of the day. But during the day, the whole city is where they are working. With truck drivers, you can’t tell a local driver, someone driving within 100 miles of their home, to go take a load from Houston to New York. There is no point to surfacing that. So, now the app is a lot more proactive about personalizing that search-and-discovery experience.

So, we’re sending out push notifications. Hey, this load is one you’ve taken before. It just showed up on our system. Do you wanna book it? And then when they get into an app, there is a whole For You section. It’ll say: Recommended Because The Load Will Take You Home. Recommended Because You’ve Done This Load Before.

Madrigal: Netflix-style.

Berdinis: Right.

Madrigal: Are there other companies that want to digitize the brokerage business?

Berdinis: There’s a few “Uber for trucking” companies. They don’t call themselves that anymore, but they used to. There’s Transfix out of New York, Convoy out of Seattle. And if you search “Uber for trucking” you’ll see dozens that came and went.

When Silicon Valley Took Over Journalism
August 8th, 2017, 01:24 PM

Chris Hughes was a mythical savior—boyishly innocent, fantastically rich, intellectually curious, unexpectedly humble, and proudly idealistic.

My entire career at the New Republic had been spent dreaming of such a benefactor. For years, my colleagues and I had sputtered our way through the internet era, drifting from one ownership group to the next, each eager to save the magazine and its historic mission as the intellectual organ for hard-nosed liberalism. But these investors either lacked the resources to invest in our future or didn’t have quite enough faith to fully commit. The unending search for patronage exhausted me, and in 2010, I resigned as editor.

Then, in 2012, Chris walked through the door. Chris wasn’t just a savior; he was a face of the zeitgeist. At Harvard, he had roomed with Mark Zuckerberg, and he had gone on to become one of the co-founders of Facebook. Chris gave our fusty old magazine a Millennial imprimatur, a bigger budget, and an insider’s knowledge of social media. We felt as if we carried the hopes of journalism, which was yearning for a dignified solution to all that ailed it. The effort was so grand as to be intoxicating. We blithely dismissed anyone who warned of how our little experiment might collapse onto itself—how instead of providing a model of a technologist rescuing journalism, we could become an object lesson in the dangers of journalism’s ever greater reliance on Silicon Valley.

When Chris first invited me for a chat one jacketless day in earliest spring, we wandered aimlessly across downtown Washington, paper coffee cups in hand. During those first weeks of his ownership, Chris had booked himself an endless listening tour. He seemed eager to speak with anyone who had worked at the magazine, or who might have a strong opinion about it. But as we talked, I wondered whether he wanted something more than my advice. I began to suspect that he wanted to rehire me as the New Republic’s editor. Before long he offered me the job, and I accepted.

In my experience, owners of the New Republic were older men who had already settled into their wealth and opinions. Chris was intriguingly different. He was 28, and his enthusiasm for learning made him seem even younger. During his honeymoon, he read War and Peace; the ottoman in his SoHo apartment was topped with seemingly every literary journal published in the English language. “When I first heard the New Republic was for sale,” he told me, “I went to the New York Public Library and began to read.” As he plowed through microfiche, the romance of the magazine’s history—and its storied writers, among them Rebecca West, Virginia Woolf, Edmund Wilson, Ralph Ellison, and James Wood—helped loosen his hold on his wallet.

Even after Facebook went public, leaving Chris with hundreds of millions of dollars in stock, he seemed indifferent to his wealth, or at least conflicted by it. He would get red-faced when people pointed out that he owned two estates and a spacious loft; he was apt to wear the same blazer every day. The source of his fortune didn’t define him—indeed, he always spoke of Facebook with an endearing detachment. He didn’t even use it that much, he once confessed to me at dinner. It was an admission that I found both disarming and hugely compelling. We soon began to remake the magazine, setting out to fulfill our own impossibly high expectations.

Over the past generation, journalism has been slowly swallowed. The ascendant media companies of our era don’t think of themselves as heirs to a great ink-stained tradition. Some like to compare themselves to technology firms. This redefinition isn’t just a bit of fashionable branding. As Silicon Valley has infiltrated the profession, journalism has come to unhealthily depend on the big tech companies, which now supply journalism with an enormous percentage of its audience—and, therefore, a big chunk of its revenue.

Dependence generates desperation—a mad, shameless chase to gain clicks through Facebook, a relentless effort to game Google’s algorithms. It leads media outlets to sign terrible deals that look like self-preserving necessities: granting Facebook the right to sell their advertising, or giving Google permission to publish articles directly on its fast-loading server. In the end, such arrangements simply allow Facebook and Google to hold these companies ever tighter.

What makes these deals so terrible is the capriciousness of the tech companies. Quickly moving in a radically different direction may be great for their bottom line, but it is detrimental to the media companies that rely on the platforms. Facebook will decide that its users prefer video to words, or ideologically pleasing propaganda to more-objective accounts of events—and so it will de-emphasize the written word or hard news in its users’ feeds. When it makes shifts like this, or when Google tweaks its algorithm, the web traffic flowing to a given media outlet may plummet, with rippling revenue ramifications. The problem isn’t just financial vulnerability, however. It’s also the way tech companies dictate the patterns of work; the way their influence can affect the ethos of an entire profession, lowering standards of quality and eroding ethical protections.

I never imagined that our magazine would go down that path. My first days working with Chris were exhilarating. As an outsider, he had no interest in blindly adhering to received wisdom. When we set out to rebuild the New Republic’s website, we talked ourselves into striking a reactionary stance. We would resist the impulse to chase traffic, to clutter our home page with an endless stream of clicky content. Our digital pages would prize beauty and finitude; they would brashly announce the import of our project—which he described as nothing less than the preservation of long-form journalism and cultural seriousness.

Chris said he believed that he could turn the New Republic into a profitable enterprise. But his rhetoric about profit never seemed entirely sincere. “I hate selling ads,” he would tell me over and over. “It makes me feel seedy.” And for more than a year, he was willing to spend with abandon. With the benefit of hindsight, I might have been more disciplined about the checks we, I mean he, wrote. But he had a weakness for leasing offices in prime locations and hiring top-shelf consultants. I had a weakness for handsomely paying writers to travel the globe. I moved quickly to hire a large staff, which included experienced writers and editors, who didn’t come cheap. Chris didn’t seem to mind. “I’ve never been so happy or fulfilled,” he would tell me. “I’m working with friends.”

Eventually, though, the numbers caught up with Chris. Money needed to come from somewhere—and that somewhere was the web. A dramatic increase in traffic would bring needed revenue. And so we found ourselves suddenly reliving recent media history, but in a time-compressed sequence that collapsed a decade of painful transition into a few tense months.

At the beginning of this century, journalism was in extremis. Recessions, coupled with readers’ changing habits, prodded media companies to gamble on a digital future unencumbered by the clunky apparatus of publishing on paper. Over a decade, the number of newspaper employees dropped by 38 percent. As journalism shriveled, its prestige plummeted. One report ranked newspaper reporter as the worst job in America. The profession found itself forced to reconsider its very reasons for existing. All the old nostrums about independence suddenly seemed like unaffordable luxuries.

Growing traffic required a new mentality. Unlike television, print journalism had previously shunned the strategic pursuit of audience as a dirty, somewhat corrupting enterprise. The New Republic held an extreme version of this belief. An invention of Progressive-era intellectuals, the magazine had, over the decades, become something close to a cult, catering to a loyal group that wanted to read insider writing about politics and highbrow meditations on culture. For stretches of its long history, however, this readership couldn’t fill the University of Mississippi’s football stadium.

A larger readership was clearly within reach. The rest of journalism was already absorbing this lesson, which Jonah Peretti, the founder of BuzzFeed, had put this way: R = βz. (In epidemiology, β represents the probability of transmission; z is the number of people exposed to a contagious individual.) The equation supposedly illustrates how a piece of content could go viral. But although Peretti got the idea for his formula from epidemiology, the emerging science of traffic was really a branch of behavioral science: People clicked so quickly, they didn’t always fully understand why. These decisions were made in a semiconscious state, influenced by cognitive biases. Enticing a reader entailed a little manipulation, a little hidden persuasion.
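
As a rough, worked illustration of that formula (the values of β and z below are made-up assumptions, not Peretti’s), a minimal sketch:

```python
# Worked illustration of R = beta * z with made-up numbers.
# beta: probability that any one exposed reader shares the piece (assumption).
# z:    number of people each sharer exposes the piece to (assumption).
beta = 0.02
z = 80

R = beta * z  # expected new readers generated per reader
print(f"R = {R:.2f}")                     # R = 1.60
print("spreads" if R > 1 else "fizzles")  # R > 1 means the piece keeps spreading
```

When R stays above 1, each reader begets more than one new reader and the piece keeps spreading; below 1, it fizzles.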

Chris not only felt urgency about the necessity of traffic, he knew the tricks to make it grow. He was a fixture at panels on digital media, and he had learned about virality from Upworthy, a site he had supplied with money to help launch. Upworthy plucked videos and graphics from across the web, usually obscure stuff, then methodically injected elements that made them go viral. As psychologists know, humans are comfortable with ignorance, but they hate feeling deprived of information. Upworthy used this insight to pioneer a style of headline that explicitly teased readers, withholding just enough information to titillate them into reading further. For every item posted, Upworthy would write 25 different headlines, test all of them, and determine the most clickable of the bunch. Based on these results, it uncovered syntactical patterns that almost ensured hits. Classic examples: “9 out of 10 Americans Are Completely Wrong About This Mind-Blowing Fact” and “You Won’t Believe What Happened Next.” These formulas became commonplace on the web, until readers grew wise to them.
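
A minimal sketch of the kind of headline test described above; the headlines, traffic numbers, and the simple pick-the-highest-click-through-rate rule are illustrative assumptions, not Upworthy’s actual system.

```python
# Minimal sketch of choosing the most clickable headline from a test.
# Headlines, impressions, and clicks below are illustrative assumptions.

variants = {
    "9 Facts You Probably Got Wrong": {"impressions": 10_000, "clicks": 310},
    "You Won't Believe What Happened Next": {"impressions": 10_000, "clicks": 540},
    "A Sober Look at the Evidence": {"impressions": 10_000, "clicks": 90},
}

def click_through_rate(stats: dict) -> float:
    return stats["clicks"] / stats["impressions"]

# Pick the headline with the highest click-through rate.
winner = max(variants, key=lambda headline: click_through_rate(variants[headline]))
print(winner)  # "You Won't Believe What Happened Next"
```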

The core insight of Upworthy, BuzzFeed, Vox Media, and other emerging internet behemoths was that editorial success could be engineered, if you listened to the data. This insight was embraced across the industry and wormed its way into the New Republic. Chris installed a data guru on our staff to increase our odds of producing viral hits. The guru kept a careful eye on Facebook’s trending topics and on what the public had craved at the same time the year before. “Super Bowl ads are big,” he told the staff at one of our weekly meetings. “What can we create to hit that moment?” Questions like these were usually greeted by hostile silence.

While I didn’t care for the tactics, I didn’t strenuously resist them either. Chris still encouraged us to publish long essays and deeply reported pieces. What’s more, he asked a perfectly reasonable question: Did we really think we were better than sober places like Time or The Washington Post? Clicks would rain down upon us if only we could get over ourselves and write about the same outrage as everyone else. Everyone else was doing this because it worked. We needed things to work.

One of the emblems of the new era in journalism haunted my life at the New Republic. Every time I sat down to work, I surreptitiously peeked at it—as I did when I woke up in the morning, and a few minutes later when I brushed my teeth, and again later in the day as I stood at the urinal. Sometimes, I would just stare at its gyrations, neglecting the article I was editing or ignoring the person seated across from me.

My master was Chartbeat, a site that provides writers, editors, and their bosses with a real-time accounting of web traffic, showing the flickering readership of each and every article. Chartbeat and its competitors have taken hold at virtually every magazine, newspaper, and blog. With these meters, no piece has sufficient traffic—it can always be improved with a better headline, a better approach to social media, a better subject, a better argument. Like a manager standing over the assembly line with a stopwatch, Chartbeat and its ilk now hover over the newsroom.

This is a dangerous turn. Journalism may never have been as public-spirited an enterprise as editors and writers liked to think it was. Yet the myth mattered. It pushed journalism to challenge power; it made journalists loath to bend to the whims of their audience; it provided a crucial sense of detachment. The new generation of media giants has no patience for the old ethos of detachment. It’s not that these companies don’t have aspirations toward journalistic greatness. BuzzFeed, Vice, and the Huffington Post invest in excellent reporting and employ first-rate journalists—and they have produced some of the most memorable pieces of investigative journalism in this century. But the pursuit of audience is their central mission. They have allowed the endless feedback loop of the web to shape their editorial sensibility, to determine their editorial investments.

Illustration by James Gilleard: two journalists look at Chartbeat stats.

Once a story grabs attention, the media write about the topic with repetitive fury, milking the subject for clicks until the public loses interest. A memorable yet utterly forgettable example: A story about a Minnesota hunter killing a lion named Cecil generated some 3.2 million stories. Virtually every news organization—even The New York Times and The New Yorker—attempted to scrape some traffic from Cecil. This required finding a novel angle, or a just novel enough angle. Vox: “Eating Chicken Is Morally Worse Than Killing Cecil the Lion.” BuzzFeed: “A Psychic Says She Spoke With Cecil the Lion After His Death.” TheAtlantic.com: “From Cecil the Lion to Climate Change: A Perfect Storm of Outrage One-upmanship.”

In some ways, this is just a digitally enhanced version of an old-fashioned media pile-on. But social media amplify the financial incentive to join the herd. The results are highly derivative. Joshua Topolsky, a founder of The Verge, has bemoaned this creeping homogenization: “Everything looks the same, reads the same, and seems to be competing for the same eyeballs.”

Donald Trump is the culmination of the era. He understood how, more than at any other moment in recent history, the media need to give the public the circus that it desires. Even if the media disdained Trump’s outrages, they built him up as a plausible candidate, at which point they had no choice but to cover him. Stories about Trump yielded the sort of traffic that pleased the data gods and benefited the bottom line. Trump began as Cecil the lion and ended up president of the United States.

Chris and I once sat at the breakfast table of an august Washington hotel, pondering the core qualities of the New Republic—the New Republic that we would re-create together. We didn’t say so explicitly, but we were searching for a piece of common ground, an adjective that could unite us. If there had been a whiteboard—and Chris loved whiteboards—it would have been filled with discarded terms. “We’re idealistic,” he said finally. “It ties together our storied past and our optimism about solutions.” Idealism was a word that melted my heart, and I felt uncontainable joy at the prospect of agreement. “Boom. That’s it.”

We were idealistic about our shared idealism. But my vision of the world was moralistic and romantic; his was essentially technocratic. He had faith in systems—rules, efficiencies, organizational charts, productivity tools. Around the second anniversary of Chris’s ownership, he shared a revised vision of the magazine’s future with me. As the months had slipped by, he had gotten antsy. Results, by which he meant greater web traffic and greater revenue, needed to come faster. “To save the magazine, we need to change the magazine,” he said. Engineers and marketers were going to begin playing a central role in the editorial process. They would give our journalism the cool, innovative features that would help it stand out in the marketplace. Of course, this required money, and that money would come from the budget that funded long-form journalism. We were now a technology company, he told me. (Hughes denies saying this.) To which I responded, “That doesn’t sound like the type of company that I’m qualified to run.” He assured me that I was.

Two months later, I learned from a colleague that Chris had hired my replacement—and that my replacement was lunching around New York, offering jobs at the New Republic. Before Chris had the chance to fire me, I resigned, and most members of the magazine’s editorial staff quit too. Their idealism dictated that they resist his idealism. They didn’t want to work for a publication whose ethos more clearly aligned with Silicon Valley than with journalism. They were willing to pay careful attention to Facebook, but they didn’t want their jobs defined by it. The bust-up received its fair share of attention and then the story faded—a bump on Silicon Valley’s route to engulfing journalism.

Data have turned journalism into a commodity, something to be marketed, tested, calibrated. Perhaps people in the media have always thought this way. But if that impulse existed, it was at least buffered. Journalism’s leaders were vigilant about separating the church of editorial from the secular concerns of business. We can now see the cause for fanaticism about building such a thick wall between the two.

Makers of magazines and newspapers used to think of their product as a coherent package—an issue, an edition, an institution. They did not see themselves as the publishers of dozens of discrete pieces to be trafficked each day on Facebook, Twitter, and Google. Thinking about bundling articles into something larger was intellectually liberating. Editors justified high-minded and quixotic articles as essential for “the mix.” If readers didn’t want a report on child poverty or a dispatch from South Sudan, they wouldn’t judge you for providing one. In fact, they might be flattered that you thought they would like to read such articles.

Journalism has performed so admirably in the aftermath of Trump’s victory that it has grown harder to see the profession’s underlying rot. Now each assignment is subjected to a cost-benefit analysis—will the article earn enough traffic to justify the investment? Sometimes the analysis is explicit and conscious, though in most cases it’s subconscious and embedded in euphemism. Either way, it’s this train of thought that leads editors to declare an idea “not worth the effort” or to worry about whether an article will “sink.” The audience for journalism may be larger than it was before, but the mind-set is smaller.


This essay is adapted from Franklin Foer’s forthcoming book, World Without Mind: The Existential Threat of Big Tech.

A Googler's Would-Be Manifesto Reveals Tech's Rotten Core
August 7th, 2017, 01:24 PM

An anonymous Google software engineer’s 10-page fulmination against workplace diversity was leaked from internal company communications systems, including an internal version of Google+, the company’s social network, and another service that Gizmodo, which published the full memo, called an “internal meme network.”

“I’m simply stating that the distribution of preferences and abilities of men and women differ in part due to biological causes,” the Googler writes, “and that these differences may explain why we don’t see equal representation of women in tech and leadership.”

The memo has drawn rage and dismay since its appearance Saturday, when it was first reported by Motherboard. It seemed to dash hopes that much progress has been made in unraveling the systemic conditions that produce and perpetuate inequity in the technology industry. That includes increasing the representation of women and minorities in technical jobs, equalizing pay, breaking the glass ceiling, and improving the quality of life in workplaces that sometimes resemble frat houses more than businesses.

These reactions to the screed are sound, but they risk missing a larger problem: The kind of computing systems that get made and used by people outside the industry, and with serious consequences, are a direct byproduct of the gross machismo of computing writ large. More women and minorities are needed in computing because the world would be better for their contributions—and because it might be much worse without them.

* * *

Workplace equity has become a more visible issue in general, but it has reached fever pitch in the technology sector, especially with respect to women. When the former Uber engineer Susan Fowler published an explosive accusation of sexism at that company earlier this year, people took notice. When combined with a series of other scandals, not to mention with Uber’s longstanding, dubious behavior toward drivers and municipalities, the company was forced to act. CEO Travis Kalanick was ousted (although he remains on the board, where he retains substantial control).

Given the context, it’s reasonable to sneer at the anonymous Googler’s simple grievances against workplace diversity. Supposedly natural differences between men and women make them suited for different kinds of work, he argues. Failure to accept this condition casts the result as inequality, he contends, and then as oppression. Seeking to correct for it amounts to discrimination. Rejecting these premises constitutes bias, or stymies open discourse. The Googler does not reject the idea of increasing diversity in some way. However, he laments what he considers discriminatory practices instituted to accomplish those goals, among them hiring methods designed to increase the diversity of candidate pools and training or mentoring efforts meant to better support underrepresented groups.

Efforts like these are necessary in the first place because diversity is so bad in the technology industry to begin with. Google publishes a diversity report, which reveals that the company’s workforce is currently 31 percent women overall, with women holding just 20 percent of technical roles. Those numbers are roughly on par with the tech sector as a whole, where about a quarter of workers are women.

Racial and ethnic diversity are even worse—and so invisible that they barely register as a problem for the anonymous Googler. I was chatting about the memo with my Georgia Tech colleague Charles Isbell, who is the executive associate dean of the College of Computing and the only black tenure-track faculty member among more than 80 in this top 10–ranking program.

“Nothing about why black and Hispanic men aren’t software engineers?” he asked me after reading the letter, paraphrasing another black computer scientist, Duke’s Jeffrey R.N. Forbes. “Did I glaze over that bit?” Isbell knows that Google’s meager distribution of women far outshines its terrible racial diversity. Only 2 percent of all U.S. Googlers are black, and only 4 percent are Hispanic. In tech-oriented positions, the numbers fall to 1 percent and 3 percent, respectively. (Unlike the gender data, which is global, the ethnic diversity data is for the United States only.)

These figures track computing talent more broadly, even at the highest levels. According to data from the Integrated Postsecondary Education Data System, for example, less than 3 percent of the doctoral graduates from the top-10 ranked computer science programs came from African American, Hispanic, Native American, and Pacific Islander communities during the decade ending in 2015.

Given these abysmal figures, the idea that diversity at Google (or most other tech firms) is even modestly encroaching on computing’s incumbents is laughable. To object to Google’s diversity efforts is to ignore that they are already feeble to begin with.

* * *

The Googler’s complaints assume that all is well in the world of computing technology, such that any efforts to introduce different voices into the field only risk undermining its incontrovertible success and effectiveness. But is the world that companies like Google have brought about really one worthy of blind praise, such that anyone should be tempted to believe that the status quo is worth maintaining, let alone celebrating?

Many things are easier and even better thanks to Google search (or maps, or docs)—or Facebook, or smartphones, or any of the other wares technology companies put on offer. But overall, the contemporary, technologized world is also in many ways a hellscape whose repetitive delights have blinded the public to its ills.

Products have been transformed into services given away “free” as an excuse to extract data from users. That data is woven into an invisible lattice of coercion and control—not to mention serving as a source of enormous profit when sold to advertisers or other interested parties. Apps and websites are designed for maximum compulsion, because more attention means more use, and more use means more data and thereby more value. All that data is kept forever on servers that corporations control and that are engineered—if that’s the right word for it—in a way that makes them susceptible to attack and theft.

Thanks to the global accessibility of the internet, these services strive for universal deployment. Google and Facebook have billions of “customers” who are also the source of their actual products: the data they resell or broker. The leverage of scale also demands that everyone use the same service, which dumps millions together in unholy community. Online abuse is one consequence, as are the campaigns of misdirection and “fake news” that have become the front for a new cold war.

Because of that universal leverage, work of all kinds has also been upset by or consolidated in computing services. Retail, travel, entertainment, and transportation, of course, but even professions like real estate, law, and education appear to be at risk of dismantlement via automation and global dissemination. This sea change is excused by platitudes about “innovation” and “disruption.”

All told, the business of computing is infiltrated with a fantasy of global power and wealth that naturally coheres to the entrenched power of men over generations. To mistake such good fortune for inborn ability is to ignore the existence of history.

Men—mostly white, but sometimes Asian—have so dominated technology that it’s difficult even to ponder turning the tables. If you rolled back the clock and computing were as black as hip-hop, if it had been built from the ground up by African American culture, what would it feel like to live in that alternate future—in today’s alternate present? Now run the same thought experiment for a computing forged by a group that represents the general population, brown of average color, even of sex, and multitudinous of gender identity.

Something tells me the outcome wouldn’t be Google and Twitter and Uber and Facebook. It’s depressing that it takes a determined exercise in speculative fiction even to ponder how things might be different were its works made by different hands.

Not just the services or apps, either. Given that the fundamentals of computing arose from a long legacy of ideas mostly forged by white men, it’s hard to imagine how the fundamental operation of computers at the lowest level might have been different had ideas from alternative sources underwritten it.

The business of computing is also bound to incumbents. Failing to acknowledge this truth hamstrings earnest efforts to overcome that power through diversification. For example, advocating for more women entrepreneurs (about 17 percent of start-ups have a woman founder) or venture-capital partners (about 7 percent are women) seems like a terrific path toward diversity and equity. But the venture-backed start-up itself remains beholden to the marketplace design that its mostly male precursors had already created and entrenched. Change in established companies faces the same challenges. A search for a new Uber chief executive is underway, although it remains unclear whether Uber’s culture can be changed, even with a new leader.

Even the fateful Googler’s memo enjoys the spoils of a world already designed for male supremacy. What is this letter, after all, but a displaced Reddit post? Certain but non-evidential. Feigning structure, but meandering. Long and tedious, with inept prose and dead manner. This false confidence underwrites all the claims the memo contains, from its facile defense of jingoism as political conservatism to its easy dismissal of anyone not predetermined to be of use.

And Google built an “internal meme network” expressly for the purpose of sharing material like the memo in question! How to interpret such a thing except as Google’s own private Reddit, where the bravado of the white man’s internet comes home to roost at the office? Even worse, in her statement responding to the anti-diversity memo, Google’s vice president of diversity, integrity, and governance, Danielle Brown, appears to celebrate this offering as one among “so many platforms for employees to express themselves,” such that “this conversation doesn’t end with my email today.” The problem, it seems, is also its own solution.

As my colleague Mark Guzdial puts it, women used to avoid computer science because they didn’t know what it is. Now they avoid it because they know exactly what it is.

* * *

Soon, the fall term will commence at Georgia Tech. I will take to the lectern in the introductory course for our bachelor of science degree in computational media. The program also hopes to make headway on computing’s diversity problem. Conceived after the dot-com crash and inaugurated in 2004, the degree draws half its courses, faculty, and management from computing and half from the liberal arts. The goal was to address the increased connection between computing, expression, and communication.

The results have been promising. Computational media has achieved consistently high gender equity, for example. As of spring 2017, only 24 percent of computer science students were women, whereas women made up 52 percent of the computational media students. That might give the program the greatest proportion of women among accredited computing undergraduate majors in the country. Ethnic diversity is also better: 11 percent of computational media students are black and 9 percent are Hispanic, compared with 6 and 5 percent, respectively, in CS.

But that apparent victory might be a Pyrrhic one. All the anxieties that plague the anonymous Googler also afflict programs like ours, which provide part of the funnel to tech companies like Google. As computing rose from the dot-com ashes in the mid-2000s, enrollments skyrocketed. But computational media remains small—a tenth the size of computer science, and shrinking in total number and percentage of overall computing students during the same years CS has been on the rise. As a part of that decline, it appears to be losing men to computer science in particular, and perhaps falsely inflating the program’s claims to gender equity in the process.

When it was designed, computational media hoped to attract students with an interest in areas that blend computing and creativity, among them film, the web, television, games, and so on. That move failed to anticipate the foundational grievance that courses through the Google memo: that of “dumbing down” computing with interlopers. Students, more anxious and more professionally oriented than ever, seek the surety of the computer science degree. Academic faculty and industrial managers, meanwhile, fear yielding to “CS Lite,” a derogatory name for compromising technical expertise.

We should have known that, for some, computational media inevitably would threaten to feminize computing, relegating technical creativity to service work at best, emotional labor at worst. And so, while Georgia Tech can lay claim to an impressively gender-equal accredited computing degree, it’s not clear that such an accomplishment does anything more than pay lip service to diversity, distracting attention from our ever-growing contribution to the perverted reality of a world run by the computer programmers we graduate into companies like Google.

Darkened under the shadow of this Google jeremiad, I’m not sure what to say to my students when I stand before them later this month. Computation ended up having a much more widespread and much more sinister impact on media writ large—not just traditional media forms like music and news, but also on media as a name for every interaction between people and the world, from socializing to health, education to transit. It’s not possible to rewind the clock on the past, nor to burn it all down and start anew. But training up more women and minorities to service technological power’s existing ends—founding start-ups, working at Google—only transfers the lip service from educational programs to tech companies. They process diversity into glossy reports that placate shareholders and the public, all the while putting on the same show with a slightly different cast of characters.

Reader, I want so desperately to leave you with an alternative. A better option, a new strategy. One that would anticipate and defang the inevitable maws crying, “Well, what’s your solution, then?” But facile answers spun off-the-cuff by white men in power—aren’t these the things that brought trouble in the first place?

Maybe there is an answer, then, after all: Just to shut up for a minute. To stop, and to listen, and even to step out of the way. Not entirely, and not forever. But long enough, at least, to imagine how some of the lost futures of pasts left unpursued might have made for different, actual presents—and that might yet fashion new futures. Only a coward would conclude that none of them are better than the one that’s become naturalized as inevitable.

The Three Paradoxes Disrupting American Politics
August 7th, 2017, 01:24 PM

In their “hot mic” moment last week, Senators Susan Collins and Jack Reed gave cold bipartisan voice to a deep fear: The president of the United States is stunningly unprepared for his job and just may be—to use a technical political science term I learned in graduate school—two cans short of a six pack. Between the Senate’s late-night “damn the torpedoes” voting frenzy to repeal something, anything, from Obamacare, and the president’s early morning tweets proclaiming his “complete power” to pardon himself and his relatives, what used to be business as usual in Washington never looked so good.

It is comforting to think that Trump is the only thing standing between us and the good old dysfunctional ways of Washington. But I have my doubts. The president’s disruption engine is powered by three paradoxes. Each was made possible by technological innovations. All will endure long after this ringmaster moves his circus to another town.

Paradox #1: More information, less credibility

Trump’s cries about fake news get receptive audiences in part because we live in the most complex information age in human history. The volume of data is exploding, and yet credible information is harder to find. The scale of this information universe is staggering. In 2010, Eric Schmidt, now the executive chairman of Google’s parent company, Alphabet, noted that every two days, we create as much information as we did from the dawn of civilization up to 2003. Today Google processes close to 62,000 search queries a second. That’s more than 5.3 billion queries a day.
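
The daily figure is simply the per-second rate scaled up. A minimal back-of-the-envelope check, using only the 62,000-queries-per-second figure cited above (the variable names are mine):

    # Scale the cited per-second query rate up to a full day
    queries_per_second = 62_000           # figure cited above
    seconds_per_day = 60 * 60 * 24        # 86,400
    print(f"{queries_per_second * seconds_per_day:,}")
    # prints 5,356,800,000, i.e. "more than 5.3 billion" queries a day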

Information is everywhere, but good information is not. Why? Because the barriers to entry are so low. In the Middle Ages, when paper was a sign of wealth and books were locked up in monasteries, knowledge was considered valuable and creating it was costly. To be sure, there was some flat-earthy nonsense locked up in those tomes and religious and political rulers used their claims to knowledge as political weapons. Today the challenge is different. We now live at the opposite extreme, where anyone—from foreign adversaries to any crackpot with a conspiracy theory—can post original “research” online. And they do. Telling the difference between fact and fiction isn’t so easy. A few months ago, one of my graduate student researchers included information in a paper that I found oddly inaccurate, so I checked the footnotes. The source was “RT”—as in the outlet formerly known as Russia Today, a propaganda arm of the Kremlin. Stanford students aren’t the only ones struggling with real fake news. In December, the Pakistani defense minister rattled his nuclear saber in response to an Israeli tweet. Except the Israeli tweet wasn’t real.

Meanwhile, attitudes toward traditional information sources like the mainstream media and universities are souring, particularly among Republicans. Confidence in newspapers has declined by more than 20 points since 1977. Last month, a Pew survey found that for the first time, a majority of Republicans had a negative view of American universities.

The antidote to bad information used to be more information. Not anymore. What good is more information if people don’t trust it—or if the traditional methods of sorting the good information from the bad (including the weighty brands of certain news organizations) don’t work anymore? The marketplace of ideas is experiencing market failure. When information proliferates and credibility shrinks, reasoned argument suffers and democratic society decays.

Paradox #2: More connectivity, less civility

Today nearly half the world is online. By 2020 more people are expected to have cell phones than running water. But civility has not accelerated in tandem. In earlier times, it took some effort to deliver hurtful messages. In the U.K.’s Parliament building, seating in the House of Commons is designed to space the opposition at least two sword lengths apart from the ruling party—just in case. Distance has its benefits. Years ago, I got a letter from a federal inmate claiming I was part of a conspiracy behind both 9/11 and the murder for which he had been convicted. He went to a lot of trouble to write me with all that multi-colored ink. He even had to pay for the stamp. Now, I can get anonymous vitriol on Twitter, or in my email, or in the comments section of something I write—instantly, for free.

Sure, connectivity has created tremendous positive changes, including new markets in developing nations and new bonds among kindred spirits across vast distances. But connectivity has also made nasty discourse more convenient and socially acceptable. The step between harmful speech and chilled speech is a small one. Civility is not a convenience. It is a cornerstone of free speech in a liberal society. Trump may personify America’s descent into coarse discourse and amplify its spread. But it didn’t start and will not stop in Trump Tower or the White House. The root causes lie deeper.

Paradox #3: The wisdom of crowds, the duplicity of crowds

Technology has unleashed the wisdom of crowds. Now you can find an app harnessing the experiences and ratings of likeminded users for just about anything. The best taco truck in Los Angeles? Yelp. The highest rated puppy crate? Amazon. Youth hostels in Barcelona? TripAdvisor. Researchers are even using the wisdom of crowds to better predict which internet users may have pancreatic cancer and not even know it yet—based on the search histories of other cancer patients.

But the 2016 presidential election revealed that not all crowds are wise, or even real. The wisdom of crowds can be transformed into the duplicity of crowds. Deception is going viral.

On social media, one person can masquerade as hundreds, even thousands, with fake personas. Thanks to advances in artificial intelligence, it’s also possible to create armies of automated social media bots to develop, manipulate, and spread deceptive information at speeds and scales unimaginable before now. Facebook is so concerned about the duplicity of crowds that in April the company issued a “call to arms” report about what it’s doing to stop bad actors from manipulating public discourse and deceiving people.

Disruption used to be a good word, signifying creativity and innovation—shaking up things in a good way. The Founding Fathers were disruptive, imagining a nation ruled by laws and not kings. Their great American experiment inspired generations and helped transform half the world into democracies. NASA was disruptive, pushing the frontiers of science to land a man on the moon. Silicon Valley tech companies have disrupted all sorts of industries to become the engines of the global economy.

But disruption often has unintended consequences. More information, connectivity, and crowdsourcing are also shrinking credibility, eroding civility, and empowering the duplicity of crowds. These technological chickens are coming home to roost, and they’re likely to stay here even when Trump is gone.

Your Smartphone Reduces Your Brainpower, Even If It's Just Sitting There
August 3rd, 2017, 01:24 PM

I sit down at the table, move my napkin to my lap, and put my phone on the table face-down. I am at a restaurant, I am relaxed, and I am about to start lying to myself. I’m not going to check my phone, I tell myself. (My companion’s phone has appeared face-down on the table, too.) I’m just going to have this right here in case something comes up.

Of course, something will not come up. But over the course of the next 90 minutes I will check my phone for texts, likes, and New York Times push alerts at every pang of boredom, anxiety, relaxation, satiety, frustration, or weariness. I will check it in the bathroom and when I return from the bathroom. I don’t really enjoy this, but it is very interesting, even if some indignant and submerged part of my psyche moans that I am making myself dumber every time I look at it. As, in fact, I am.

A smartphone can tax its user’s cognition simply by sitting next to them on a table, or being anywhere in the same room with them, suggests a study published recently in the Journal of the Association for Consumer Research. It finds that a smartphone can demand its user’s attention even when the person isn’t using it or consciously thinking about it. Even if a phone’s out of sight in a bag, even if it’s set to silent, even if it’s powered off, its mere presence will reduce someone’s working memory and problem-solving skills.

These effects are strongest for people who depend on their smartphones, such as those who affirm a statement like, “I would have trouble getting through a normal day without my cell phone.”

But few people seem to know they’re paying this cognitive smartphone tax as it plays out. Few participants in the study reported feeling distracted by their phone during the tests, even though the data suggested their attention was not at full capacity.

“We have limited attentional resources, and we use some of them to point the rest of those resources in the right direction. Usually different things are important in different contexts, but some things—like your name—have a really privileged status,” says Adrian Ward, an author of the study and a psychologist who researches consumer decision-making at the University of Texas at Austin.

“This idea with smartphones is that it’s similarly relevant all of the time, and it gets this privileged attentional space. That’s not the default for other things,” Ward told me. “In a situation where you’re doing something other than, say, using your name, there’s a pretty good chance that whatever your phone represents is more likely to be relevant to you than whatever else is going on.”

In other words: If you grow dependent on your smartphone, it becomes a magical device that silently shouts your name at your brain at all times. (Now remember that this magical shouting device is the most popular consumer product ever made. In the developed world, almost everyone owns one of these magical shouting devices and carries it around with them everywhere.)

In the study, Ward and his colleagues examined the performance of more than 500 undergraduates on two different common psychological tests of memory and attention. In the first experiment, some participants were told to set their phones to silent without vibration and either leave them in their bag or put them on their desk. Other participants were asked to leave all their possessions, including their cell phone, outside the testing room.

In the second experiment, students were asked to leave their phones on their desk, in their bag, or out in the hall, just as in the first experiment. But some students were also asked to power their phone off, regardless of location.

In both experiments, students who left their phones outside the room seemed to do best on the test. They also found the trials easier—though, in follow-up interviews, they did not attribute this to their smartphone’s absence or presence. Throughout the study, in fact, respondents rarely attributed their success or failure on a certain test to their smartphone, and they almost never reported thinking they were underperforming on the tests.

Daniel Oppenheimer, a professor of psychology at the University of California, Los Angeles, noted that this effect is well-documented for enticing objects that aren’t smartphones. He was not connected to this study, though his own research has focused on other vagaries of digital life. Several years ago, he and his colleagues suggested that students remember far more of a lecture when they take notes by hand rather than with a laptop.

“Attractive objects draw attention, and it takes mental energy to keep your attention focused when a desirable distractor is nearby,” Oppenheimer told me in an email. “Put a chocolate cake on the table next to a dieter, a pack of cigarettes on the table next to a smoker, or a supermodel in a room with pretty much anybody, and we would expect them to have a bit more trouble on whatever they’re supposed to be doing.”

He continued: “We know that cell phones are highly desirable, and that lots of people are addicted to their phones, so in that sense it’s not so surprising that having one visible nearby would be a drain on mental resources. But this study is the first to actually demonstrate the effect, and given the prevalence of phones in modern society, that has important implications,” he said.

Ward will continue researching the psychological costs and benefits of the new technologies that have permeated everyday life. His dissertation at Harvard looked at the implications of delegating cognitive tasks to the cloud. “Big things are happening so quickly. It’s the 10th anniversary of the iPhone, and the internet’s only been around for 25 years, yet already we can’t imagine our lives without these technologies,” he said. “The joyful aspects, or positive aspects—or the addictive aspects—are so powerful, and we don’t really know the negative aspects yet.”

“We can yell our opinions at each other, and people are going to agree or disagree with them, and set up luddites-versus-technolovers debates. But I wanted to get data,” he told me.

It’s worth noting that the type of psychological research Ward conducts—trials on willing, Western undergrads, often participating in studies to fulfill course credit—has suffered a crisis of confidence in recent years. Psychologists have had difficulty replicating some of the most famous experiments in their field, leading some to argue that all psychology experiments should be replicated before they are published. This study has not yet been replicated.

One possible consequence of Ward’s work extends beyond smartphones. Most office workers now know that “multi-tasking” is a fallacy. The brain isn’t doing two tasks at once as much as it’s making constant, costly switches between tasks. But Ward says that assiduously not multi-tasking isn’t very helpful, either.

“When you’re succeeding at not multitasking—that is, when you’re doing a ‘good job’—that’s not exactly positive as well,” he said. That’s because it takes mental work, and uses up attentional resources, to avoid distraction.

Instead, he recommends that the most dependent users just put their smartphone in another room.



The Hair Dryer, Freedom’s Appliance
August 1st, 2017, 01:24 PM

I used to scandalize my friends with this confession: “I don’t own a hair dryer.”

It was as if I’d told them I ride a horse to work. But their surprise was justified: 90 percent of U.S. households have a hair dryer. They come standard in most hotel rooms. The hair dryer is tangled up with the history of fashion, the evolution of women’s roles, and the development of gendered social spaces.

In the beginning, the hair dryer wasn’t a home appliance. In 1888, Alexandre-Ferdinand Godefroy debuted his “hair dressing device” in a French salon. It wasn’t pretty: His dryer was a clumsy, seated machine, resembling a vacuum cleaner—essentially a giant hose connected to a heat source. At the time, women wore their hair long and looped, or curled into elaborate updos. For formal occasions, they might have ribbons, feathers, or flowers woven into their locks. Godefroy’s invention aimed to speed up the labor involved with these creations. But his machine failed to circulate air effectively, so the time saved wasn’t significant. The prototype was far too unwieldy to become widespread anyway.

Hair dryers didn’t take off until the first handheld units became available, in the early 1920s. These metal, gun-shaped models arrived right when women’s hairstyles were shifting from the mountainous piles of Gibson Girl curls that required dozens of bobby pins to the tidy, easier-to-shape bobs of flappers. It was a radical break from past styles. As Rachel Maines, a technology historian at Cornell University, explained to The New York Times, “Having clean, shiny, fluffy hair—that’s a 20th-century thing.” This new trend was also happy news for the hair dryer. Dirty hair could hide in a pompadour, but a shorter ’do that hung free would reveal limp or stringy hair.

Early handheld hair dryers were still difficult to use. Their metallic (often aluminum) casings made them hard to wield. Also, drying times were far longer than today’s norm, as the devices drew only 100 watts of electricity compared to the 2,000 watts of modern versions. That made them exhausting to use over the long periods of time required for drying. Some early versions had pedestals to give tired arms a rest. Nevertheless, these dryers were considered a marvel of convenience, marketed as having “loads of hot or cold air instantly. Just by pressing the handle button.”

The handheld versions for the home were joined by hooded models for the salon. Made of metal and later of plastic, and applying an even, all-over heat, hooded dryers entered widespread use in the 1930s. In the decades that followed, they became a defining trait of the salon scene.

This was an unsettled time for American women. First they joined the workforce during the war effort, in the 1940s. Later, they were driven back into the home. During these postwar years, the salon became a cherished second space for women outside the home. The task of “setting” hair into the molded hairdos popular in the day, such as Veronica Lake’s cascading S-shaped waves or Grace Kelly’s sculpted bob, required regular appointments at the salon, establishing it as a popular weekly meeting spot. The image of a row of women idly flipping through magazines under a hair dryer hood became a symbol of postwar prosperity and of women’s new leisure time.

In an effort to bring that salon cachet into the home, the bonnet hair dryer debuted in 1951. This model had a soft, shower cap-style headpiece that the user would attach to a motor via a hose. In a Sunbeam commercial from the 1960s, the bonnet dryer was advertised to be “so fast that it actually dries hair in an average of 22 minutes.” These models were also meant to mimic the salon experience: “Just select any one of four temperatures. Then, relax,” the commercial suggested. They came in little handled carrying cases that could be toted around, but typically the user would stay seated in a single spot while hot air circulated. Advertisements frequently showed models chatting on the phone, suggesting that salon-level socializing and the community it inspired wouldn’t be lost if women did their own grooming at home.

Another invention that sprang from hooded hair dryers was the “wave machine.” The hairstylist Marjorie Joyner, known for her talent in creating marcel waves, connected pot-roast rods to a dryer, and mechanized marcelling was born. Hair salons were racially segregated in these years, but the wave device became popular in black and white salons alike. With this machine, Joyner appears to have become the first African-American woman to secure a patent.

* * *

In the 1960s and ’70s, the sexual revolution left its mark on fashion—and hair. The rigid gender divisions of the previous decades began to soften. Icons like the Beatles and the Monkees were wearing their hair longer in mod “mop tops,” and influencing other men to do the same. That helped spur the counterculture trend of long, hippie locks. Companies moved quickly to capitalize on this potential new hair-dryer market. As one Clairol ad said to its male reader: “Congratulations. You have more hair today than a year ago.” But then it explained to men that the “secret” to mastering this new look “isn’t just more hair. It’s cleaner hair, blown dry—to give it bulk and body it out.”

Hairstylists gained celebrity status in these decades, thanks to stylist-to-the-stars Vidal Sassoon and films like Shampoo, which starred Warren Beatty as a hunky hairdresser irresistible to his customers. Suddenly, a functional grooming tool had sex appeal. The stylist-as-Casanova persona can still be found today, in celebrity stylists like Harry Josh. Miranda Kerr promoted his signature dryer by blowing it across her décolletage during a photo shoot, treating it more like a seductive bottle of perfume than an appliance.

During the ’60s, plastics began to dominate consumer goods, and hair dryers were no exception. Once made from metal or occasionally Bakelite, now hair dryers joined a flood of “fantastic plastic” products facilitated by companies like DuPont and Dow Chemical. But apart from an alteration to its materials and the addition of various attachments and heat conductors, like ceramic and tourmaline, the hair dryer has changed very little since its birth. Writing for Fast Company in 2011, James Gaddy lamented the devices’ boring uniformity, complaining that they “all look the same.” Gaddy denounced all models as little more than “a holding pen for the small motor-driven fan and heater inside.”

It wasn’t until the 1970s that regulations were drafted to improve dryer safety. And only as recently as 1991 were these devices legally required to contain ground fault circuit interrupters, which greatly decrease the danger of high-voltage injury or death. Older models still resurface in the news for plunking into bathtubs and electrocuting their owners, as in the case of the young Palomera sisters (ages 7 and 9), who were cooling off in the tub when their old dryer dropped in.

* * *

Hair dryers weave in and out of public and private spaces, making them different from other grooming tools. Hair depilators and eyelash curlers remain hidden behind closed doors. But hair dryers began in public and continue to occupy public space. Some salons will even place a chair in their picture window, putting the hair-drying experience on full display and marketing it to passersby.

In the last decade, hair dryers have taken up public real estate anew thanks to an explosion of hair-drying bars in urban areas that deal exclusively with the washing and drying of hair (no cuts or dyeing treatments). The company Drybar, one of the most popular, has more than 70 locations across the United States and Canada. Styles are modeled on the extremely coiffed looks paraded on the red carpets of award shows and on reality TV. These ultra-manicured hairdos are a status symbol akin to a handbag or diamond ring. Maintaining them requires a commitment of $40 to $50 per week for hairdos so impermanent that one humid day can dismantle them.

But the hair dryer may now be at a crossroads. In 2016, Dyson, the maker of vacuums, fans, and hand dryers, set out to remodel the hair dryer. As it had done with its Airblade hand dryers, Dyson hopes to revolutionize the market, encouraging more women to take their hair back into their own hands. The company shifted the motor to the base of the dryer, making it smaller and supposedly improving drying time. Though many of Dyson’s changes are more aesthetic than functional, this is a market where looks matter.

At the same time, the fashion pendulum has begun to swing away from high-polish, TV-ready looks toward a more relaxed, no-effort appearance. Celebrities like Alicia Keys have embraced the no-makeup look, and the #iwokeuplikethis movement has reinvigorated a fresher, less preened appearance. Hair might become less conforming and more free and breezy again—which could push hot air out of the public eye and back behind the bathroom door.

I finally succumbed and bought a hair dryer. I had spent years flying out the door with a damp head of hair, but I decided my soggy morning appearance was doing me a disservice. It communicated a certain young, relaxed attitude that went against the professional adult I wanted to become. Years later, I still feel awed that after 10 minutes of fanning a dryer around, my hair can be tamed. Now I see why ads for hair dryers were once laced with a million exclamation points and showed women who were smitten with their new grooming gadgetry. As one ad put it, you can store your hair dryer away “or you can keep it out in the open and make a pet out of it.”


This article appears courtesy of Object Lessons.

What Steve Bannon Wants to Do to Google
August 1st, 2017, 01:24 PM

Over the past year, the old idea of enforcing market competition has gained renewed life in American politics. The basic idea is that the structure of the modern market economy has failed: There are too few companies, most of them are too big, and they’re stifling competition. Its supporters argue that the government should do something about it, reviving what in the United States we call antitrust laws and what in Europe is called competition policy.

Stronger antitrust enforcement—it’s enough of a thing, now, that Vox is explaining it.

The loudest supporters of this idea, so far, have been from the left. But this week, a newer and more secretive voice endorsed a stronger antitrust policy.

Steve Bannon, the chief strategist to President Donald Trump, believes Facebook and Google should be regulated as public utilities, according to an anonymously sourced report in The Intercept. This means they would get treated less like a book publisher and more like a telephone company. The government would shorten their leash, treating them as privately owned firms that provide an important public service.

What’s going on here, and why is Bannon speaking up?

First, the idea itself: If implemented, it’s unclear exactly how this regime would change how Facebook and Google run their business. Both would likely have to be more generous and permissive with user data. If Facebook is really a social utility, as Mark Zuckerberg has said it is, then maybe it should allow users to export their friend networks and import them on another service.

Both companies would also likely have to change how they sell advertising online. Right now, Facebook and Google together capture about half of all global digital ad spending. They capture even more of the market’s growth, earning more than three quarters of every new digital ad dollar spent. Except for a couple of Chinese firms, which have a lock on their domestic market but little reach abroad, no other company controls more than 3 percent of worldwide digital ad spending.

So if the idea were implemented, it would be interesting, to say the least—but it’s not going to become law. The plan is a prototypical alleged Bannonism: iconoclastic, anti-establishment, and unlikely to result in meaningful policy change. It follows another odd alleged Bannon policy proposal, leaked last week: He reportedly wants all income above $5 million to be taxed at a 44-percent rate.

Which brings me to the second point: Bannon’s proposal is disconnected from the White House policy that he is, at least on paper, officially helping to strategize. The current chairman of the Federal Communications Commission, Ajit Pai, is working to undo the rule classifying broadband internet as a public utility (the classification that underpins “net neutrality”). Trump named Pai chairman of the FCC in January.

Bannon’s endorsement of stronger antitrust enforcement (not to mention a higher top marginal tax rate) could very well be the advisor trying to signal that he is still different from Trump. Bannon came in as the avatar of Trump’s pro-worker, anti-immigration populism; he represented the Trump who tweeted campaign-era promises not to cut Social Security, Medicare, or Medicaid.

As the president endorses Medicaid cuts and drifts closer to a Paul Ryan-inflected fiscal conservatism, Bannon may be looking for a way to preserve his authenticity.

Third, it’s the first time I’ve seen support for stronger antitrust enforcement from the right. So far, the idea’s strongest supporters have been Congressional Democrats. Chuck Schumer has elevated the idea to the center of the “Better Deal” policy agenda for the 2018 midterms. Before that, its biggest supporters included Bernie Sanders, who railed against “Too Big to Fail” banks in his presidential campaign, and Elizabeth Warren, who endorsed a stronger competition policy across the economy last year.

Finally, while antitrust enforcement has been a niche issue, its supporters have managed to put many different policies under the same tent. Eventually they may have to make choices: Does Congress want a competition ombudsman, as exists in the European Union? Should antitrust law be used to spread the wealth around regional economies, as it was during the middle 20th century? Should antitrust enforcement target all concentrated corporate power or just the most dysfunctional sectors, like the pharmaceutical industry?

And should antitrust law seek to treat the biggest technology firms—like Google, Facebook, and perhaps also Amazon—like powerful but interchangeable firms, or like the old telegraph and telephone companies?

There will never be one single answer to these questions. But as support grows for competition policy across the political spectrum, they’ll have to be answered. Americans will have to examine the most fraught tensions in our mixed system, as we weigh the balance of local power and national power, the deliberate benefits of central planning with the mindless wisdom of the free market, and the many conflicting meanings of freedom.

Trump Tests the F-Bomb Policy at The New York Times
July 31st, 2017, 01:24 PM

The New York Times likes to think of itself as a family newspaper. It is also the self-described paper of record. It may not be either, but it’s definitely not both all the time.

Take, for example, the moment when the Times had to choose whether to quote the new White House communications director in a particularly colorful tirade against his colleagues. Anthony Scaramucci, who joined the Trump administration last week, eviscerated the White House chief of staff, Reince Priebus, and the administration’s chief strategist, Stephen Bannon, in an interview with a New Yorker reporter on Wednesday.

“Reince is a fucking paranoid schizophrenic, a paranoiac,” Scaramucci said.

And also: “I’m not Steve Bannon. I’m not trying to suck my own cock.”

Then, for good measure: “I’m not trying to build my own brand off the fucking strength of the president. I’m here to serve the country.”

In this case, the Times really went for it, publishing all three quotes verbatim. Maybe not every journalist would make the same call, but most would understand why the Times went this route. Many publications try to avoid gratuitous foul language, even in quotes, unless the meaning of the thing being conveyed depends on it. Otherwise, matters of taste notwithstanding, bad language is often just distracting. Plenty of people curse in casual conversation; rarely is it actually meaningful.

But when the White House director of communications uses language like, well, you know, to describe the president’s inner circle, it’s in the public interest to know exactly what was said. (The Atlantic quoted Scaramucci, too, by the way.) The Times didn’t immediately grant my request to speak with an editor Thursday night, but a spokesperson did direct me to comments by the paper’s deputy managing editor, which he’d published to Twitter.

The Times published Scaramucci’s profanity only after top editors, including the executive editor Dean Baquet, “discussed whether it was proper,” Clifford Levy wrote. “We concluded that it was newsworthy that a top Trump aide used such language. And we didn’t want our readers to have to search elsewhere to find out what Scaramucci said.” Given what the newspaper has had to navigate before, it’s likely the vulgar reference to Bannon was the most difficult call among the three. Indeed, wrote one of the top editors at the Times, Sam Dolnick, the debate was “one for the ages.”

“A couple of years ago I got in trouble for ‘hand job.’ In a quote,” tweeted Emily Bazelon, a staff writer for The New York Times Magazine. In fact, Bazelon’s reference to hand jobs, at least the reference that appears in her 2014 magazine story about college romance, was not a direct quote but a line she paraphrased.

Either way, it’s not like the Times never prints vulgar language.

There was the Access Hollywood tape last fall, which featured Trump bragging about being able to grab women without their consent. The Times repeatedly printed the vulgar terms he used. It also published an offensive term—uh, rhymes with “blunt”—that a Trump adviser had used to describe Hillary Clinton, only to remove the word from an op-ed after the fact with a brief editor’s note flagging the change.

There have been other instances in which obscenities found their way into the Times. F-bombs are sprinkled throughout book excerpts, for example, and in web-only extras—quoting the poet Allen Ginsberg, in the case of a 2007 blog post. The word “fuck” also appeared in the full text of the Starr Report, which detailed President Bill Clinton’s sexual relationship with a 22-year-old White House intern, Monica Lewinsky, and which the Times printed in 1998. The report included, for instance, a quote from Lewinsky saying she wished the president would “acknowledge ... that he helped fuck up my life.” In a separate story that day, the paper described how the graphic language in the report was making things challenging for newscasters, in particular. “On CBS, Bob Schieffer looked profoundly embarrassed as he read cold from the report,” the Times wrote. Another Clinton-era curse word that made it into the paper? “Dumb-ass,” which Rolling Stone had mistakenly quoted Clinton as having said, a dispute that the Times covered.

The Starr Report, published by The New York Times in 1998, tested the paper’s language standards. (Screenshot from the New York Times)

It isn’t so easy to track the vulgarities the Times has printed, however, in part because it has used text-reading software to digitize much of its archival material. On one hand, this is why the newspaper’s archival presentation is so impressive. But it’s also why a search for any given curse word is liable to turn up a ton of false positives. To a computer, for instance, the 1975 headline “Court Shift on Sanity Debated” scans as “Court Shit on Sanity Debated.” Which is funny, sure, but not actually what the paper printed at the time. The Times online archive is full of this sort of thing.
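
To see why, consider a minimal sketch of an archive search. It assumes nothing about the Times’s actual systems; it just runs a naive word search over the 1975 headline mentioned above and over the garbled version the text-reading software produced:

    # A naive profanity search over OCR'd text flags a headline the paper never printed.
    import re

    original = "Court Shift on Sanity Debated"      # what the 1975 paper actually printed
    ocr_output = "Court Shit on Sanity Debated"     # what the text-reading software rendered

    profanity = re.compile(r"\bshit\b", re.IGNORECASE)

    for text in (original, ocr_output):
        print(f"{text!r} -> flagged: {bool(profanity.search(text))}")
    # 'Court Shift on Sanity Debated' -> flagged: False
    # 'Court Shit on Sanity Debated' -> flagged: True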

A computer-generated headline from a 1951 New York Times story appears to contain an expletive, but the original version of the story did not. (Screenshot from The New York Times)
Here’s that 1951 story as it actually appeared in the Times. (NYT)

The newspaper’s reporters have not infrequently written themselves into contortions to avoid foul language. “Barnyard expletive” is a favorite cop-out. (Personally, I prefer “baloney” if you want to get cute about it, but maybe that’s just me.) Mostly, they end up describing unsavory words in vague terms like “a vulgarity that refers to part of the male anatomy” or “a vulgarism for a part of the female anatomy.” Countless Times articles about George Carlin, the comedian who was famous for his bit about the “seven words you can never say on television,” dutifully avoided printing them. (A blessing, perhaps, in the YouTube age, as they’re best delivered by Carlin himself.) “A Master of Words, Including Some You Can’t Use in a Headline,” one article’s headline cheekily acknowledged.

These days, the paper tends to find creative workarounds for foul language. In previous eras, however, it sometimes avoided covering a story altogether on account of vulgarity. A 1901 story described a trial with testimony that was “of such a character” that the Times could not print it. (The paper may not have had a choice; the Times noted that the papers in London, where the trial was underway, had refused to publish the testimony first.)

In a 2007 story about a hardcore punk band, the writer Kelefa Sanneh laid out clearly what it would take for the Times to print the group’s colorful name. “Well, the name won’t be printed in these pages,” Sanneh wrote, “not unless an American president, or someone similar, says it by mistake.”

“I made a mistake in trusting in a reporter,” Scaramucci tweeted on Thursday night. “It won’t happen again.”

Why Zuckerberg and Musk Are Fighting About the Robot Future
July 31st, 2017, 01:24 PM

Elon Musk and Mark Zuckerberg are having a spat about whether or not artificial intelligence is going to kill us all.

Musk, the chief of Tesla and SpaceX who has longstanding worries about the potentially apocalyptic future of AI, recently returned to that soapbox, making an appeal for proactive regulations on AI. “I keep sounding the alarm bell,” he told attendees at a National Governors Association meeting this month. “But until people see robots going down the street killing people, they don’t know how to react.”

In a Facebook Live broadcast, Zuckerberg, Facebook’s CEO, offered a riposte. He called Musk a “naysayer” and dismissed his doomsday fears as unnecessary negativity. “In some ways I actually think it is pretty irresponsible,” Zuck scolded. Musk then retorted on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.”

Seeing the CEOs of publicly traded tech companies go at it like Tay and Kanye is unfamiliar territory. Open sneering between public figures is normally reserved for tabloid socialites or feuding celebrities. But this is 2017—the president attempts to enact policy via Twitter, after all—so expectations must be adjusted. Rappers and reality-television stars feud because their prosperity is directly yoked to their public image. That’s true for tech business leaders now, too. Musk and Zuckerberg aren’t engaged in a debate about ideas. They are peacocking their personal identities in order to serve their future interests.

* * *

I’ve argued before that “artificial intelligence” has become so overused that the term is almost meaningless. Like “algorithm” before it, technologists, businesspeople, and journalists wield the idea like a magic wand that turns ordinary computer software and devices into world-saving (or world-ending) marvels. And given AI’s long history of wonder and dread in science fiction, people are primed to expect it to usher in utopia or dystopia.

When a term has a wealth of possible meanings, it is easy to ascribe one’s favorite meaning to it. “Disruption” is like this, as is “fake news.” The term “climate change” is now used by the right and left alike for opposite purposes. The Republican talking-points pollster Frank Luntz advocated for it over “global warming” to the G.W. Bush administration, because it sounded less severe. Change can be good, the reasoning goes.

Artificial intelligence has left the orbit of computer science, and even science fiction, and become an abstract talking point. When people make use of it, especially powerful actors like Musk and Zuckerberg, it serves a perlocutionary function: as personal branding.

When it comes to personal brands, Musk’s is easier to characterize. He’s long been compared to Tony Stark, the fictional industrialist and alter ego of Iron Man in Marvel comics. After Musk sold his first company, an online publishing service called Zip2, to Compaq for $307 million in 1999, he co-founded X.com, which was eventually renamed PayPal and sold to eBay for $1.5 billion in 2002. Musk’s PayPal partner Peter Thiel turned to venture investing with the spoils, but Musk decided to make space rockets instead, and SpaceX was born. His subsequent ventures—electric/autonomous car maker Tesla, solar-cell manufacturer SolarCity, the Hyperloop tube-transit concept, and the new, associated tunneling-equipment firm The Boring Company—all represent infrastructural invention of the Tony Stark variety.

As the statistician Mark Palko recently noted, Musk has a material interest in maintaining the Tony Stark alter-ego persona. When Musk waxes futuristic on self-driving cars, underground transit, brain-embedded computers, or Mars colonies, he reinforces the current and future value of his various ventures.

Portraying AI as an existential threat to humanity is consistent with this interest. If intelligent machines might strip humanity of its unmatched leverage over the natural and artificial environment, then industrial solutions must be pursued in order to stop them. Even if the threat of a robot apocalypse is unlikely, Musk has reason to advocate for aggressive contingency plans.

It’s difficult to match Zuckerberg’s business persona to a specific comic-book hero (Peter Parker? Reed Richards?). But unlike Musk, Zuck’s business and personal interests reside at the level of ideas rather than materials. Facebook is his singular venture, an enormously successful company that deals entirely in digitized text, images, video, and sound. These are representations—ideas and concepts—rather than concrete goods.

When Zuckerberg has looked beyond these immaterial representations, he has always done so in order to corner the market on more opportunities for symbol-creation and dissemination. Facebook’s purchases of Instagram and WhatsApp offer examples. And Zuckerberg’s big hardware acquisition, the VR-headset maker Oculus, represents a new terrain for virtual experience, not a new means of taming cities, continents, or the cosmos.

From this vantage point, software is always friendly and tame—or at least domesticable. Zuckerberg has billions of users and millions of advertisers who want to reach them, and terror about the future of computers only alienates those ordinary people from the friendly future he hopes to deliver to them. Zuckerberg learned this lesson the hard way, when his demonstration of a home-grown AI for his house, which he named Jarvis, was met with sneers and mockery. His recent tour of ordinary people and places in the United States shows just how completely he learned this lesson. The man is newly serious about reinforcing computing as a friendly, or at least innocuous, force on everyday life.

That’s especially true given Facebook’s undeniable impact on the 2016 election—a feat that is hardly benign, but one Zuckerberg seems to have defused expertly anyway. This is why Zuckerberg is an “optimist,” as he puts it, when it comes to artificial intelligence. To say otherwise would suggest that computers are intrinsically risky. That fear, even if hypothetical, has potentially dire consequences for Zuckerberg’s business and personal future.

* * *

When figures like Musk and Zuckerberg talk about artificial intelligence, they aren’t really talking about AI—not as in the software and hardware and robots that might produce delight or horror when implemented. Instead they are talking about words, and ideas. They are framing their individual and corporate hopes, dreams, and strategies. And given Musk and Zuck’s personal connection to the companies they run, and thereby those companies’ fates, they use that framing to help lay the groundwork for future support among investors, policymakers, and the general public.

On this front, it’s hard not to root for Musk’s materialism. In an age when almost everything has become intangible, delivered as electrons and consumed via flat screens, launching rockets and digging tunnels and colonizing planets and harnessing the energy of the sun feel like welcome relief. But the fact that AI itself is an idea more than it is a set of apparatuses suggests that Zuckerberg might have the upper hand. Even if it might eventually become necessary to bend the physical world to make human life continuously viable, the belief in that value starts as a concept, not a machine.

The Algorithm That Makes Preschoolers Obsessed With YouTube
July 27th, 2017, 01:24 PM

Toddlers crave power. Too bad for them, they have none. Hence the tantrums and absurd demands. (No, I want this banana, not that one, which looks identical in every way but which you just started peeling and is therefore worthless to me now.)

They just want to be in charge! This desire for autonomy clarifies so much about the behavior of a very small human. It also begins to explain the popularity of YouTube among toddlers and preschoolers, several developmental psychologists told me.

If you don’t have a 3-year-old in your life, you may not be aware of YouTube Kids, an app that’s essentially a stripped-down version of the original video-sharing site, with videos filtered by the target audience’s age. And because the app is designed for a phone or tablet, kids can tap their way across a digital ecosystem populated by countless videos—all conceived with them in mind.

The videos that surface in the app are selected by YouTube’s recommendation algorithm, which takes into account a user’s search history, viewing history, and other data.* The algorithm is basically a funnel through which every YouTube video is poured—with only a few making it onto a person’s screen.

Building this recommendation engine is a difficult task, simply because of the platform’s scale. “YouTube recommendations are responsible for helping more than a billion users discover personalized content from an ever-growing corpus of videos,” researchers at Google, which owns YouTube, wrote in a 2016 paper about the algorithm. That corpus grows by many hours of video uploaded to the site every second of every day. Making a recommendation system that’s worthwhile is “extremely challenging,” they wrote, because the algorithm has to continuously sift through a mind-boggling trove of content and instantly identify the freshest and most relevant videos—all while knowing how to ignore the noise.

The architecture of YouTube’s recommendation system, in which “candidate videos” are retrieved and ranked before presenting only a few to the user. (Google / YouTube)
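
To make that two-stage funnel concrete, here is a minimal, hypothetical sketch in Python: a cheap candidate-generation pass narrows the full corpus to videos that share topics with a user’s watch history, and a costlier ranking pass orders those candidates before only a handful reach the screen. The class, field names, and scoring weights are illustrative assumptions, not anything Google has published.

```python
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    topics: set
    freshness: float       # 0.0 (old) to 1.0 (just uploaded)
    avg_watch_time: float  # seconds a typical viewer spends on it


def generate_candidates(corpus, watch_history, limit=200):
    """Cheap first pass: keep videos whose topics overlap with what the user has watched."""
    watched_topics = set()
    for video in watch_history:
        watched_topics |= video.topics
    candidates = [v for v in corpus if v.topics & watched_topics]
    return (candidates or corpus)[:limit]


def rank(candidates, top_n=10):
    """Costlier second pass: order the surviving candidates by engagement and freshness."""
    def score(v):
        return 0.7 * v.avg_watch_time + 0.3 * 100 * v.freshness
    return sorted(candidates, key=score, reverse=True)[:top_n]


def recommend(corpus, watch_history):
    """The funnel: a huge corpus -> a few hundred candidates -> a handful on screen."""
    return rank(generate_candidates(corpus, watch_history))
```

The design choice the paper describes is the same one sketched here: a rough filter that can be run over an enormous library, followed by a more careful ranking of the few items that survive it.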

And here’s where the ouroboros factor comes in: Kids watch the same kinds of videos over and over. Videomakers notice what’s most popular, then mimic it, hoping that kids will click on their stuff. When they do, YouTube’s algorithm takes note and recommends those videos to more kids. Kids keep clicking on them, and keep being offered more of the same. Which means videomakers keep making those kinds of videos—hoping kids will click.
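
That loop can be sketched in a few lines of toy Python: each tap boosts the weight of the tapped video’s topics, so every new round of suggestions drifts further toward whatever the child has already watched. The catalog, field names, and weighting scheme are invented for illustration; they are not YouTube’s actual model.

```python
import random
from collections import Counter


def recommend_next(catalog, topic_weights, k=5):
    """Favor videos whose topics the viewer has tapped on most; break ties randomly."""
    def score(video):
        return sum(topic_weights[t] for t in video["topics"]) + random.random()
    return sorted(catalog, key=score, reverse=True)[:k]


def simulate_session(catalog, taps=20):
    """Each tap reinforces the tapped video's topics, narrowing future suggestions."""
    topic_weights = Counter()
    for _ in range(taps):
        suggestions = recommend_next(catalog, topic_weights)
        choice = suggestions[0]                 # the child taps the top suggestion
        topic_weights.update(choice["topics"])  # engagement feeds back into the ranking
    return topic_weights


catalog = [
    {"id": "surprise-eggs-01", "topics": ["surprise eggs", "toys"]},
    {"id": "nursery-rhymes-02", "topics": ["songs", "nursery rhymes"]},
    {"id": "paw-patrol-03", "topics": ["paw patrol", "toys"]},
]
print(simulate_session(catalog))  # the weights pile up on whatever got tapped first
```

Run the simulation and the weights concentrate on whichever topics happened to get tapped early, which is the filter-bubble dynamic in miniature.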

This is, in essence, how all recommendation algorithms work. It’s how filter bubbles are made. A little bit of computer code tracks what you find engaging—what sorts of videos do you watch most often, and for the longest periods of time?—then sends you more of that kind of stuff. Viewed a certain way, YouTube Kids is offering programming that’s very specifically tailored to what children want to see. Kids are actually selecting it themselves, right down to the second they lose interest and choose to tap on something else. The YouTube app, in other words, is a giant reflection of what kids want. In this way, it opens a special kind of window into a child’s psyche.

But what does it reveal?

“Up until very recently, surprisingly few people were looking at this,” says Heather Kirkorian, an assistant professor of human development in the School of Human Ecology at the University of Wisconsin-Madison. “In the last year or so, we’re actually seeing some research into apps and touchscreens. It’s just starting to come out.”

Kids’ videos are among the most watched content in YouTube history. This video, for example, has been viewed more than 2.3 billion times, according to YouTube’s count:

You can find some high-quality animation on YouTube Kids, plus clips from television shows like Peppa Pig, and sing-along nursery rhymes. “Daddy Finger” is basically the YouTube Kids anthem, and ChuChu TV’s dynamic interpretations of popular kid songs are inescapable.

Many of the most popular videos have an amateur feel. Toy demonstrations like surprise-egg videos are huge. These videos are just what they sound like: Adults narrate as they play with various toys, often by pulling them out of plastic eggs or peeling away layers of slime or Play-Doh to reveal a hidden figurine.

Kids go nuts for these things.

Here’s a video from the YouTube Kids vloggers Toys Unlimited that’s logged more than 25 million views, for example:

The vague weirdness of these videos aside, it’s actually easy to see why kids like them. “Who doesn’t want to get a surprise? That’s sort of how all of us operate,” says Sandra Calvert, the director of the Children’s Digital Media Center at Georgetown University. In addition to surprises being fun, many of the videos are basically toy commercials. (This video of a person pressing sparkly Play-Doh onto chintzy Disney princess figurines has been viewed 550 million times.) And they let kids tap into a whole internet’s worth of plastic eggs and perceived power. They get to choose what they watch. And kids love being in charge, even in superficial ways.

“It’s sort of like rapid-fire channel surfing,” says Michael Rich, a professor of pediatrics at Harvard Medical School and the director of the Center on Media and Child Health. “In many ways YouTube Kids is better suited to the attention span of a young child—just by virtue of its length—than something like a half-hour or hour broadcast program can be.”

Rich and others compare the app to predecessors like Sesame Street, which introduced short segments within a longer program, in part to keep the attention of the young children watching. For decades, researchers have looked at how kids respond to television. Now they’re examining the way children use mobile apps—how many hours they’re spending, which apps they’re using, and so on.

It makes sense that researchers have begun to take notice. In the mobile internet age, the same millennials who have ditched cable television en masse are now having babies, which makes apps like YouTube Kids the screentime option du jour. Instead of being treated to a 28-minute episode of Mister Rogers’ Neighborhood, a toddler or preschooler might be offered 28 minutes of phone time to play with the Daniel Tiger’s Neighborhood app. Daniel Tiger’s Neighborhood is a television program, too—a spin-off of Mister Rogers’—aimed at viewers ages 2 to 4.

But toddlers and preschoolers are actually pretty separate groups, as far as researchers are concerned. A 2-year-old and a 4-year-old might both like watching Daniel Tiger, or the same YouTube Kids video, but their takeaways are apt to be very different, Kirkorian told me. Children under the age of 3 tend to have difficulty taking information relayed to them through a screen and applying it to real-life situations. Many studies have reached similar conclusions, with a few notable exceptions. Researchers recently discovered that when a screentime experience becomes interactive—FaceTiming with Grandmère, let’s say—kids under 3 years old actually can make strong connections between what’s happening onscreen and offscreen.

Kirkorian’s lab designed a series of experiments to see how much of a role interactivity plays in helping a young child transfer information this way. She and her colleagues found striking differences in what young children learned—even kids under 2 years old—depending on whether they could interact with an app or were just watching a screen. Other researchers, too, have found that incorporating some sort of interactivity helps children retain information better. Researchers at different institutions have different definitions of “interactivity,” but in one experiment it was an act as simple as pressing a spacebar.

“So there does seem to be something about the act of choosing, having some kind of agency, that makes a difference for little kids,” Kirkorian says. “The speculative part is why that makes a difference.”

One idea is that kids, especially, like to watch the same thing over and over and over again until they really understand it. I watched the Dumbo VHS so many times as a little kid that I would recite the movie on long car rides. Apparently, this is not unusual—at least not since the age of VCRs and, subsequently, on-demand programming and apps. “If they have the opportunity to choose what they’re watching, then they’re likely to interact in a way that meets their learning goals,” Kirkorian says. “We know the act of learning new information is rewarding, so they’re likely to pick the information or videos that are in that sweet spot.”

“Children like to watch the same thing over and over,” says Calvert, of Georgetown. “Some of that is a comprehension issue, so they’ll repeatedly look at it so they can understand the story. Kids often don’t understand people’s motives, and that’s a major driver for a story. They don’t often understand the link between actions and consequences.”

Young kids are also just predisposed to becoming obsessive about relatively narrow interests. (Elephants! Trains! The moon! Ice cream!) Around the 18-month mark, many toddlers develop “extremely intense interests,” says Georgene Troseth, an associate professor of psychology at Vanderbilt University. Which is part of why kids using apps like YouTube Kids often select videos that portray familiar concepts—ones that feature a cartoon character or topic they’re already drawn to. This presents a research challenge, however. If kids are just tapping a thumbnail of a video because they recognize it, it’s hard to say how much they’re learning—or how different the app environment really is from other forms of play.

Even the surprise-egg craze isn’t really novel, says Rachel Barr, a developmental psychologist at Georgetown. “They are relatively fast-paced and they include something that young children really like: things being enclosed and unwrapped,” she told me. “I have not tested it, but it seems unlikely that children are learning from these videos since they are not clearly constructed.”

“Interactivity is not always a good thing,” she added.

Researchers differ on the degree to which YouTube Kids is a valuable educational tool. Obviously, it depends on the video and the involvement of a caregiver to help contextualize what’s on screen. But questions about how the algorithm works also play a role. It’s not clear, for instance, how heavily YouTube weighs previous watching behaviors in its recommendation engine. If a kid binge-watches a bunch of videos that are lower quality in terms of learning potential, are they then stuck in a filter bubble where they’ll only see similarly low-quality programming?

There isn’t a human handpicking the best videos for kids to watch. The only human input on YouTube’s side is to monitor the app for inappropriate content, a spokesperson for YouTube told me. Quality control has still been an issue, however. YouTube Kids last year featured a video that showed Mickey Mouse-esque characters shooting one another in the head with guns, Today reported.

“The available content is not curated but rather filtered into the app via the algorithm,” said Nina Knight, a YouTube spokesperson. “So unlike traditional TV, where the content is being selected for you at a specified time, the YouTube Kids app gives each child and family more of the type of content they love and anytime they want it, which is incredibly unique.”

At the same time, the creators of YouTube Kids videos spend countless hours trying to game the algorithm so that their videos are viewed as many times as possible—more views translate into more advertising dollars for them. Here’s a video by Toys AndMe that’s logged more than 125 million views since it was posted in September 2016:

“You have to do what the algorithm wants for you,” says Nathalie Clark, the co-creator of a similarly popular channel, Toys Unlimited, and a former ICU nurse who quit her job to make videos full-time. “You can’t really jump back and forth between themes.”

What she means is, once YouTube’s algorithm has determined that a certain channel is a source of videos about slime, or colors, or shapes, or whatever else—and especially once a channel has had a hit video on a given topic—videomakers stray from that classification at their peril. “Honestly, YouTube picks for you,” she says. “Trending right now is Paw Patrol, so we do a lot of Paw Patrol.”

There are other key strategies for making a YouTube Kids video go viral. Make enough of these things and you start to get a sense of what children want to see, she says. “I wish I could tell you more,” she added, “but I don’t want to introduce competition. And, honestly, nobody really understands it.”

The other thing people don’t yet understand is how growing up in the mobile internet age will change the way children think about storytelling. “There’s a rich set of literature showing kids who are reading more books are more imaginative,” says Calvert, of the Children’s Digital Media Center. “But in the age of interactivity, it’s no longer just consuming what somebody else makes. It’s also making your own thing.”

In other words, the youngest generation of app users is developing new expectations about narrative structure and informational environments. Beyond the thrill a preschooler gets from tapping a screen, or watching The Bing Bong Song video for the umpteenth time, the long-term implications for cellphone-toting toddlers are tangled up with all the other complexities of living in a highly networked on-demand world.


* Unlike YouTube’s main website, YouTube Kids does not use an individual child’s geographic location, gender, or age to make recommendations, a spokesperson told me. YouTube Kids does, however, ask for a user’s age range. The YouTube spokeswoman cited the Children's Online Privacy Protection Rule, a Federal Trade Commission requirement for operators of websites aimed at kids under 13 years old, but declined to answer repeated questions about why the YouTube Kids algorithm used different inputs than the original site’s algorithm.

Snopes Faces an Ugly Legal Battle
July 25th, 2017, 01:24 PM

On Monday, the editorial staff of Snopes.com wrote a short plea for help. The post said that the site needed money to fund its operations because another company that Snopes had contracted with “continues to essentially hold the Snopes.com web site hostage.”

“Our legal team is fighting hard for us, but, having been cut off from all revenue, we are facing the prospect of having no financial means to continue operating the site and paying our staff (not to mention covering our legal fees) in the meanwhile,” the note continued.

It was a shocking message from a website that’s been around for more than 20 years—and that’s become a vital part of internet infrastructure in the #fakenews era. The site’s readers have responded. Already, more than $92,000 has been donated to a GoFundMe with a goal of $500,000.

So, what’s going on? Well, it probably won’t surprise you that there’s a startup tech company and a lawsuit involved. There are claims and counterclaims. But if you want the gory details available in the court filings, here we go.

Snopes began in the early 1990s as a small website built by the husband-and-wife team of David and Barbara Mikkelson. Snopes was what you sent to your cousins who circulated crazy conspiracy theories from their Hotmail accounts. In 2003, the Mikkelsons founded a parent company, Bardav, for the site.

Well into the 2010s, the site retained the look and feel of a previous era of the internet. And perhaps because of that, its verdicts on the veracity of claims carried a kind of authority that other media fact-checkers lacked. People, as much as anyone can in today’s crazed informational environment, trusted Snopes.

The founders divorced in 2015, and some titillating details of the split became public. Each retained 50 percent of the company.

In the summer of that year, Bardav entered into an agreement with a newish San Diego company called Proper Media to “provide content and website development services as well as advertising sales and trafficking” to Snopes. Proper Media’s principals were Chris Richmond, who co-owns a wiki called TV Tropes, and Drew Schoentrup, both now described in court filings as residents of Puerto Rico (more on that shortly).* Each of the two held a 40 percent share of Proper Media. They were joined by three other people with smaller equity stakes: Tyler Dunn, Ryan Miller, and Vincent Green.

In July 2016, Barbara Mikkelson sold her half of Bardav to these five men, leaving her ex-husband with five new partners in the company. Because Bardav was an S corporation, its shareholders had to be people, not other companies. So the stock-purchase agreement between Mikkelson and the men assigned each of them equity in the same proportions they held in Proper Media, which works out to roughly 20 percent of Bardav apiece for Richmond and Schoentrup, with the rest of her half going to Dunn, Miller, and Green.

Diamond Creek Capital financed a big chunk of the deal, with help from Barbara Mikkelson herself. Each of the five men on the Proper Media side of the deal signed personal-liability notes with Diamond Creek Capital.

For a time, it seemed as if the arrangement was working out. The San Diego Union-Tribune visited the Proper Media offices, out of which Snopes employees were working. The story featured Vincent Green, a former Marine who’d been an intern only months before, and Brooke Binkowski, the site’s managing editor and a longtime journalist.

“Before we came on board, there was not even a content-management system for the site,” Green told the paper. “It was an excruciating process for developing content. What you see now is our quick and dirty change-over from 20 years of bad code to something more responsive and functional.”

But behind the scenes, there was trouble. Proper Media’s CEO and president had moved to Puerto Rico, according to a cross-complaint filed by Green and corroborated by Proper Media’s own filings. They set up a separate company there, which Green claims was a tax-avoidance scheme, one he says he told them he was uncomfortable with.

Meanwhile, in a story as old as media, the site’s editors worried that the co-owners didn’t understand what Snopes was, and that they only wanted to juice its revenues, so they could sell it.

On February 18—in a much-disputed series of events—Green and Proper Media’s largest shareholders, Richmond and Schoentrup, had a contentious meeting. In the weeks that followed, Green either left or was forced out, and he went to work at Bardav, which is to say Snopes, where he remains.

On March 10, in an action that Proper Media disputes, David Mikkelson canceled the contract that had been in place governing interactions between Bardav and Proper Media. Mikkelson claims that he had a right to do so as CEO and sole director. Proper Media says that he could not because it was understood that Drew Schoentrup was a director of the company as well, even though he had not been elected through a formal process.

Also in dispute are Green’s shares, which, when combined with Mikkelson’s, would give the two of them putative control of the company. Proper Media contends that, more or less, the shares never truly belonged to Green, and that he was merely holding them on behalf of Proper Media.

There are many other claims and counterclaims flying around the filings related to the lawsuit. It’s not worth going through all of them in detail, but this much can be said: This is a mess.

Proper Media’s lawyer, Karl Kronenberger, told me that they’ve alleged that “David Mikkelson has engaged in gross financial, technical, and corporate mismanagement.” Mikkelson told me that Proper Media “continue to hold themselves out as authorized advertising representatives. They have continued to collect the revenue and they have not paid us any advertising revenue.”

What does the future hold for Snopes? That could become slightly clearer next Friday, when a hearing in San Diego will address competing motions. Mikkelson is seeking an injunction to force Proper Media to hand over control of the site. Meanwhile, Proper Media is seeking to remove Mikkelson as a director of Bardav.

In the meantime, it looks like the GoFundMe will at least keep the site running for a while longer, but based on conversations with those who know the site’s financial picture, Snopes’s operating expenses are close to $100,000 a month. If a resolution to the dispute isn’t reached soon, it could mean the end of both Proper Media and Snopes.

Which would be a terrible end for the kind of website that bracingly defied the logic of corporate digital media. It hadn’t pivoted to video. It was a site people trusted. It was technologically unsophisticated. It was profitable.

Stay tuned.


* This article originally stated that Chris Richmond founded TV Tropes. We regret the error.