In theory, superhero fans should be delighted by Thursday’s news that Disney, in one of the biggest corporate deals in history, is acquiring most of 21st Century Fox for around $52 billion. One of the major creative consequences of the move is that it will bring together the hitherto scattered intellectual properties of Marvel Comics, allowing for a consolidation of the Marvel Cinematic Universe into one coherent entity. Until now, Disney had owned most of Marvel’s vast pantheon of heroes, including Iron Man, Thor, and Black Panther, while Fox owned the rights to the X-Men, the Fantastic Four, and Deadpool. Now, all these characters can be brought under the same roof. Rotten Tomatoes described the merger as “a win” for Marvel fans, since it’ll give them more X-Men, more Fantastic Four, more Silver Surfer, more Doctor Doom, and more crossovers among these characters.
If you’ve ever dreamed of an X-Men versus the Avengers movie, then the Disney/Fox union is cause for glee:
While fans are exhilarated, there’s little in the news to cheer filmmakers like Ridley Scott or Martin Scorsese, both of whom have expressed dismay at the domination of superhero films (although Scorsese has perhaps surrendered to commercial reality by agreeing to produce a Joker film). Nor is it good news for those concerned about the increasing concentration of corporate power in a few hands. As Harvard literary scholar (and comic book fan) Stephanie Burt noted:
But this deal might be bad even for those, like superhero fans, who are welcoming it. Under the control of Disney, the Marvel Cinematic Universe is a hugely popular but ultimately cookie-cutter franchise, with little tonal diversity from movie to movie. Each film follows the same formula: cartoonish fisticuffs leavened with jokes, building to an epic, universe-threatening CGI ending.
One advantage of having different studios making superhero films is that they bring different sensibilities. As Alex Abad-Santos of Vox notes,
Fox’s X-Men aren’t locked into that [Disney] system, nor does there seem to be an overarching narrative that the studio wants to accomplish with its X-Men movies the way Marvel wants to with its Avengers, leaving individual X-Men movies to establish their own distinctive approaches and personalities. X-Men: First Class is a stylish, charming superhero period piece set in the Cold War era. X-Men: Days of Future Past plays with time travel and the idea of changing history. Deadpool is a raunchy comedy full of butts, murder, and jokes about chimichangas, while this year’s apocalyptic Logan is a bloody Western that received rave reviews.
Yes, the freedom that Fox has given the X-Men has resulted in some clunkers, like 2009’s X-Men Origins: Wolverine. But it’s also resulted in superhero films that take risks, delve into different genres, and play around with different modes of storytelling.
Abad-Santos’s point can be pushed further: The very history of Marvel Comics shows that trying to shoehorn every character into a single universe smothers creativity.
The fact that Marvel Comics comprises a single “universe” is due to the creativity of two men, artist Jack Kirby and writer/editor Stan Lee. When Kirby and Lee teamed up in the late 1950s, Marvel (then known as Atlas) was a second-string company that specialized in imitating genres made popular by more successful firms: teen humor, romance, horror, and western. Atlas/Marvel produced many comics, but they were nearly all imitative.
In 1961, Kirby and Lee broke from this pattern by creating innovative superhero comics, characterized by angst-ridden heroes and flights of cosmic fantasy. Starting with the Fantastic Four, their creations soon proliferated into a vast array of characters including the Hulk, Thor, the Avengers, Iron Man, Black Panther, the Inhumans, the Silver Surfer, and scores of others. While Marvel employed other artists, almost all these characters came out of the Kirby/Lee team (with artist Steve Ditko co-creating Spider-Man and Doctor Strange with Lee).
The fact that there were so few people working on these comics was partly an accident of creativity and partly a matter of economics. Kirby had a powerhouse imagination and had been creating characters and pioneering genres since the early 1940s, when he and Joe Simon co-created Captain America. But the burst of creativity was also fueled by the very low page rates that Marvel paid. Kirby was a freelancer, and to support his family he worked at a prodigious rate: In the early 1960s, he often produced 100 pages a month, four or five times the rate of most artists.
While Kirby was the idea-generating dynamo populating Marvel Comics, it was Lee’s editorial instincts that led this vast array of characters to form a universe. Lee’s genius was for advertising and promotion. He discovered the value of having characters cross-pollinate, and constantly encouraged Kirby to plot stories where the Fantastic Four would meet up with the X-Men or Spider-Man. The fact that Kirby created almost all the characters anyway made this sort of mixing and matching easier.
Most of the early crossovers in Marvel Comics had the feeling of casual cameo appearances. The fusion of all these characters into a coherent universe came later, after Kirby left in 1970 in a dispute over not receiving credit for his work. Post-Kirby, Marvel was taken over by a second wave of epigonic creators like Roy Thomas and Jim Shooter, who were interested in bringing continuity to the Marvel universe. As a result, Marvel in the 1970s and 1980s became increasingly imitative of its earlier successes, with fewer new characters created. Tellingly, most of the big Marvel successes of the 1970s (Conan, Tomb of Dracula, Star Wars) were based on properties created by other firms or in the public domain. Marvel never rediscovered its creativity, and continues to run on the fumes of the Kirby and Lee years.
In 1982, Kirby teamed up with writer Steve Gerber (creator of Howard the Duck) on a comic series satirizing Marvel Comics. In Destroyer Duck, the avian superhero fights an evil corporation called Godcorp Ltd., whose slogan is “Grab It All, Own It All, Drain It All.” The Godcorp of 2017 is Disney. With Thursday’s deal, it looks like Hollywood is about to replicate the same mistakes that Marvel Comics made in the 1970s. If you think superhero franchises are dull copycats now, just wait until Marvel movies are all under one roof. With the imperative to bind every character together, any possibility of stylistic and narrative diversity will be squashed.
The superhero genre is already a very specialized niche, even in its most adventurous works. The variation that exists tends to be stylistic: the Gothic grotesqueries of Gotham City in Tim Burton’s Batman (1989) or the Art Deco nightmare vision of Sam Raimi’s Darkman (1990). With Disney absorbing Fox, even stylistic variety of this sort is likely to be stamped out, replaced by movies that constantly reshuffle the same deck of cards. There will be team-ups and cross-fertilizations galore: the X-Men versus the Avengers, Spider-Man meets Wolverine, Black Panther allying with the Hulk. No movie will be self-contained. They will all be part of the franchise tapestry, planned out years in advance by Disney. Each movie will serve as a trailer for the next.
There will be no narrative finality in the superhero genre: It won’t matter if characters live or die, since they will always be resurrected for future installments. And these movies will dominate the box office, further crowding out big-budget films of other genres and increasingly pushing independent films out of theaters altogether, to debut on Netflix and Hulu (in acquiring Fox, Disney will also own the latter). In announcing the deal, Disney CEO Bob Iger said, “The acquisition of this stellar collection of businesses from 21st Century Fox reflects the increasing consumer demand for a rich diversity of entertainment experiences that are more compelling, accessible and convenient than ever before.” This is pure corporate doublespeak. Disney’s latest corporate takeover will dilute what little “diversity” is left in superhero movies and Hollywood as a whole. We will all live in Marvel’s universe.
Two things that the political press excels at are intensive (I would say excessive) speculation about who will win a particular race, and overconcluding about its result. (Paying no attention to the first is an excellent way to save time.) Yet while speculation is harmless (if people lay money on the basis of it, that’s their problem), overconcluding can lead to unwarranted expectations and major disappointments—even to a sense that the election must have been stolen or rigged, a disillusionment with politics. Which could then lead to non-participation, which as we’ve seen can tip a race in a certain direction.
While the celebration of Doug Jones’s remarkable victory over Roy Moore for the Alabama U.S. Senate seat vacated by Jeff Sessions was certainly warranted, overconcluding is rampant. (I’m willing to bet that President Trump, who is skilled at transferring blame, is now even angrier at Sessions for having accepted his invitation to be attorney general.) That the Alabama senatorial election came down to those two particular individuals was like a planetary collision. We’re not likely to see another like it for a very long time.
Each of the two final Senate candidates was unique. On the Republican side was an erratic figure who’d been twice removed as chief justice of the state Supreme Court, once for ordering state judges to disobey a U.S. Supreme Court ruling legalizing same-sex marriage (Moore considers homosexuality “evil,” and likened the decision to the Dred Scott case) and also for his defiance of a federal judge’s order that a huge statue commemorating the Ten Commandments be removed from his courthouse. Great numbers of Alabamans considered him the wrong man to represent Alabama in the U.S. Senate even before the disclosure in The Washington Post on November 9 that he had a history of pursuing and molesting little girls when he was in his thirties. One would have to look long and hard to find a candidate who’d been ordered to stay away from a small-town shopping mall because of his predilection for picking up underage waitresses. The views of the racially insensitive (I’m being kind) Roy Moore are antithetical to a great number of Alabama Republicans who want to see the state shed the detritus of the George Wallace era and move into the 21st century. They want to see the state make further advances in education, science, and high tech. Moore wasn’t the leader to take them there.
On the other hand, Doug Jones was an unusual mix: a moderate Democrat with deep ties in the African-American community. Appointed a U.S. attorney in 1997 by Bill Clinton, Jones soon prosecuted and won the convictions of the remaining two Ku Klux Klan members who’d plotted the infamous 1963 bombing of a Birmingham Baptist church that killed four black girls. No one before Jones had taken up the case, and he thus became a hero among Alabama blacks. Unlike most white candidates, Jones didn’t show up in black communities as a stranger. Yamiche Alcindor of The New York Times pointed out on Morning Joe that while white candidates usually go into black areas “yelling about incarceration,” Jones talked about the same kinds of bread-and-butter issues—jobs and health care—that blacks care about as much as whites do.
Moore, with his firm backing by Trump (after some wavering), was enough of a threat to get black voters, especially black women, to vote in uncommon percentages (despite the state’s efforts to suppress the black vote) for Jones. Trump’s at best cavalier treatment of blacks and his retrograde domestic policies were having their impact. Moore of course also had the prominent backing of the scruffy Steve Bannon, who saw Moore as a useful tool in his war on the Republican establishment and who was of no comfort to African Americans. Much that Bannon talked about in Alabama, such as his extended attack on Mitt Romney, was of little help to Moore. Bannon the outsider was inept in his public appearances, for instance telling audiences that Alabamans didn’t want outsiders coming in to tell them how to vote.
Democrats’ nationwide jubilation at Jones’s victory is understandable: It’s much more exciting when the underdog wins, and all the more so in this case because the triumph was pulled off in a deeply conservative state; the last time a Democrat had won a Senate race there was 25 years ago (and that one, Richard Shelby, turned Republican in 1994). Jones defeated a man unqualified for the office backed by a president unqualified for the office. He ran a picture-perfect campaign, getting himself all over the state and going into areas ostensibly unlikely to support him; Moore was missing from the contest (and at some point from the state) for the last two weeks, showing up just before election day.
The enthusiasm of Democrats to cast their ballots, like that displayed in Virginia a month before, was taken as a highly encouraging sign for Democrats for the 2018 midterm elections. But while a blue wave may well be forming across the nation, and while Trump himself might be the Democrats’ strongest weapon in 2018, there are grounds for pause before assuming that next November will produce a series of Alabamas. That contest had several unique features. First, given the vast chasm between the qualifications and records of the two Alabama candidates, the Democrat’s victory was quite narrow: Jones prevailed by only one and a half points, or 20,715 votes. He was the proverbial dog walking on his hind legs.
Moreover, Jones’s lead in votes was outnumbered by 22,811 write-ins—most of them for the hugely popular University of Alabama football coach—believed to have come from college-educated Republican suburbanites and other voters who couldn’t tolerate Moore but also couldn’t bring themselves to vote for a Democrat. The write-in vote was encouraged over the final weekend by none other than Richard Shelby, who announced that he couldn’t vote for Moore (and likely didn’t want him as a Senate colleague) and so had written in the name of someone else. (Bannon commented later that there’s “a special place in Hell” for people like Shelby.)
People can take all the pleasure they want in Jones’s largely unforeseen victory, but they’d be well-advised to curb their overconcluding. The particular circumstances of that most unusual race are unlikely to be replicated.
“I think any man, if someone comes in the room—gay or straight—their first thought is ‘would I fuck them?’” Glenn Close had been talking about murder mysteries for a while before she delivered this dose of gender essentialism. She told me a nice story about her great aunt, who was sent home in a passing car after getting knocked out by a horse in the 1930s. We were here to talk about her latest movie, an adaptation of Agatha Christie’s Crooked House. While on set her fluffy white dog Pip leapt into an algae-covered fountain because he thought it was solid ground.
In Crooked House Close plays an aunt named Edith, alongside Gillian Anderson and Christina Hendricks and Terence Stamp and Jeremy Irons’s son Max Irons. They’re all members of a sprawling and horrible family who live in a country manor (the same one that appears in the aerial shots of The Remains of the Day). The patriarch of the family has died. Was he murdered, and if so, why? In those plot basics, Crooked House is classic and undistinguished Christie. The ending is where the movie’s value lies, and Close helps land it spectacularly.
The whole production, however, carries the unmistakable whiff of holiday television. Close is just too good for it, something we’ve learned again and again when she makes her co-stars look like toddlers in a school play. Rose Byrne (her co-star in Damages) is a good actor, but she’s not good enough to seem good next to Glenn Close. Those Dalmatian puppies? Totally outshone. But Edith is still a solid role for Close. Her best turns always hinge on concealment, so she’s suited to the murder mystery.
In Fatal Attraction, all the horror is contingent on her performance: the charm of her initial seduction campaign, the reasonableness of her initial demands, the bloody denouement. In the superb Albert Nobbs, Close played a waiter “living as a man,” as the old phrase went, buttoning all Albert’s fears and identity struggles and traumatic past under a stiff shirt and a blank face. Her Gertrude in Mel Gibson’s Hamlet was another triumph of implication, loving her son with a suggestive incestuousness buried under warmth.
Close’s inscrutability is part of what makes her beauty so singular. Her face is dimensional the way that a cut gem has facets. When I close my eyes and recall our interview now, my memory is of her jaw and the way it met her cheekbone.
She has had an extraordinary life, too: Her father was personal physician to Mobutu Sese Seko, dictator of Zaire, and helped to stem an Ebola outbreak in that country in 1976. Her second cousin is Brooke Shields. When she was seven, her parents joined a controlling cult called Moral Re-Armament (MRA), now known as Initiatives of Change. She traveled with an MRA musical group called Up With People in the late ’60s, eventually forming her own group, the Green Glenn Singers.
Close left MRA at 22 to attend the College of William and Mary, where she discovered serious acting. She acted on stage and television through her twenties, but didn’t appear in a movie until The World According to Garp, when she was 35, playing the 31-year-old Robin Williams’s mother. Her next role was in The Big Chill, and the ’80s continued to be kind to her. Fatal Attraction came in 1987, and cemented her position as a rare artist in Hollywood.
A woman like Glenn Close ought to know a thing or two about the human condition. So, though I was wilting a little under the Close gaze, I managed to bring up Hollywood’s sexual harassment scandal. She looked up sharply, with a face totally different to the one she’d been using to discuss British aristocracy and Agatha Christie. “It’s been going on so long.” Close has spent 35 years at Hollywood’s coalface, auditioning and vying for roles and getting them done and promoting them. The casting couch was an occupational reality. “I never was preyed upon,” she said, but “I had a very uncomfortable audition once where someone put his hand on my thigh.” The man was not doing anything with the hand. It was just sitting there like a dead thing on Glenn Close’s leg. Actually, she recalls, it happened twice.
The man was trying to provoke “what they’d call sexual chemistry,” Close thinks. If she’d laughed and pushed him away and shown that she wasn’t intimidated, she said, that would have been a desired reaction. She said such behavior is boundary-testing, but also thinks it is animal, essential. “It’s like putting a dog in with a bitch and seeing if he wants to hump her.”
That behavior, Close feels, “is part of the male DNA.” Men’s sexual aggression is “just the way it is.” Now, “after all these centuries,” men don’t “act on that; most of them don’t.” And exposure is the way towards progress: “I think if it’s blown open and they know it’s not acceptable anymore and there will be consequences.”
It’s counterintuitive these days to refer to DNA when talking about male behavior, almost taboo or retrograde. Most young people talk about socialized masculinity, and the expectations placed on men and boys to act up to power dynamics established long before their birth. But Close is clear on this. “I think it’s in the DNA,” she repeats.
But women must be supported, and their paralyzed response to harassment understood. “I think what happens to these women who are powerless, whom people ask, ‘Why don’t you just say fuck you and walk out?’—they’re frozen,” she said. “It’s easy to judge, but no. I can really understand. You don’t know what to do.” But there’s no surprise: “I’m not shocked.”
The future looks redeemable, however, to Glenn Close. “I don’t think we can go back. I mean, what’s the alternative? There’s no alternative.”
Using taxpayer dollars, the Environmental Protection Agency has hired a cutting-edge Republican PR firm that specializes in digging up opposition research to help Administrator Scott Pruitt’s office track and shape press coverage of the agency.
According to federal contracting records, earlier this month Pruitt’s office inked a no-bid $120,000 contract with Definers Corp., a Virginia-based public relations firm founded by Matt Rhoades, who managed Mitt Romney’s 2012 presidential campaign. Following Romney’s defeat, Rhoades established America Rising, an ostensibly independent political action committee that works closely with the Republican National Committee and Republican candidates to mine damning information on opponents. Other higher-ups at Definers include former RNC research director Joe Pounder, who’s been described as “a master of opposition research,” and senior vice president Colin Reed, an oppo-research guru billed as “among the leaders of the war on [Sen. Elizabeth] Warren.”
This for-profit consulting firm offers a variety of public relations services, including digital strategy, political consulting, and media relations. According to its website, Definers’ clients include Fortune 500 corporations, political groups, and nonprofits. In the past, both Marco Rubio and John McCain have used its services, and since the 2016 election so has Rep. Diane Black (R-Tenn.). The client list for America Rising includes the RNC; Republican candidates, such as Sen. Pat Toomey (R-Penn.); and super-PACs such as the Mitch McConnell-linked Senate Leadership Fund and Karl Rove’s American Crossroads.
The company also specializes in using the press and social media to “validate your narrative.” According to the company’s website, one of the tools to help do this is its “Definers Console” media-tracking technology. Reed said his firm contracted with Pruitt’s office at the EPA, which is the first governmental client to pay for the Definers Console. The technology promises “war room” style media monitoring, analysis and advice, according to marketing materials. A brochure for the Console assures users that they will be able to “monitor for potential crises, as well as to track their message dissemination, relevant responses to their messaging, and what competitors’ actions have been.”
Besides media monitoring, users get analysis and input from Definers employees, whose experience in political campaigns and the business world helps create a unique approach “to intelligence gathering and opposition work. This experience informs the way we gather, synthesize, and disseminate information.”
“Definers has been contracted to provide media monitoring services through our Console by the EPA,” Reed says. “We provide the same service to a number of corporate and non-profit organizations.”
In response to Mother Jones’ questions about the Definers contract, EPA spokesperson Nancy Grantham said, “The Definers contract is for media monitoring/newsclip compilation.” To a question on how the contract came about, she said: “The contract award was handled through the EPA Office of Acquisition Management.”
USASpending.gov, a website that tracks federal spending, shows that in early 2016 the EPA signed a $207,000 contract for similar services with a firm called Bulletin Intelligence, which is owned by Cision, a well-known international PR giant. According to OpenSecrets.org’s expenditure data, Bulletin is not political and has not done any recent work for candidates or PACs. The contract expired in February.
Definers also recently launched a new venture with the global law firm Dentons, which describes itself as combining “political intelligence, legal advisors, campaign-style tactics, lobbying, governmental affairs, research, and communications into one unique offering” to help clients.
The career of at least one of Pruitt’s staffers has overlapped with the Republican operatives at Definers. Jahan Wilcox, who previously worked for Marco Rubio’s presidential campaign and in rapid response for the Republican National Committee, is now a spokesperson for the EPA.
Wilcox, along with other political staff in Pruitt’s EPA press shop, has had some contentious interactions with the press. In one case, when Eric Lipton of The New York Times was confirming facts for an investigation into the EPA’s industry-friendly approach to chemical regulation, EPA spokesperson Liz Bowman diverted the discussion to other outlets’ reporting rather than answering his questions. Wilcox added, “If you want to steal work from other outlets and pretend like it’s your own reporting that is your decision.”
On another occasion, shortly after the Associated Press reported on the Superfund sites affected by Hurricane Harvey, the EPA went after one of the bylined reporters in a statement, and an unnamed official later admitted to removing that reporter from the agency’s press list, saying, “We don’t think he’s a trustworthy reporter.” When Pruitt has faced criticism, the EPA has highlighted friendlier stories from conservative outlets—including Breitbart.
Pruitt has come under fire for a general lack of transparency at the EPA. The latest example is his trip promoting natural gas in Morocco. The public learned of his travels when his office posted a media release, causing confusion over why the EPA would not notify reporters ahead of time. This means that information on Pruitt’s activities in Morocco will be restricted to the EPA’s own spin.
The EPA’s work with groups affiliated with Pounder predates this contract. On a handful of occasions, the EPA has promoted positive coverage of Pruitt’s actions from the news-aggregation website Need To Know Network. Earlier this year, the website wrote a series of stories designed to shed positive light on the controversial administrator. In one story, the site describes Pruitt as “busy racking up accomplishments that both protect Americans and save millions in taxpayer dollars.” Another congratulated Pruitt for moving ahead with plans to open Alaska’s Bristol Bay to mining, writing it was “a move that will prove to be a massive job creator for President Trump and Pruitt.”
The Need to Know Network was started by Pounder and other operatives connected to America Rising and Definers Corp.
Patagonia, the outdoor sportswear company, made a bold business decision last week: It threatened to sue Donald Trump. “The President Stole Your Land,” the company declared on its website, an angry rebuke of Trump’s decision to shrink two national monuments in Utah. The removal of federal environmental protections from much of Bears Ears and Grand Staircase-Escalante National Monuments is “the largest elimination of protected land in American history” and “illegal,” the site read. “Tell the Administration that they don’t have the authority to take these lands away from you.”
The company’s sales reportedly skyrocketed. According to GQ, Patagonia gear sold by non-Patagonia retailers sold six times faster than normal on the day after the announcement; for the next two days, sales were five times higher than usual. That wasn’t the company’s intention, according to Vincent Stanley, Patagonia’s director of philosophy. “We were not driven by thinking that this was what our customers wanted,” said Stanley, who has been with Patagonia since 1973 and co-wrote a book about the company’s pro-environment business strategy. “This action is just built into who we’ve been as a company for a long time.”
Patagonia is an unusually environmentally friendly company, but other companies should take note. Americans increasingly want brands to prove that they’re helping, not hurting, the environment. A 2016 study in the International Journal of Communication showed that consumers are supporting companies they consider environmentally friendly and avoiding those they consider irresponsible. Corporations so fear this wrath that, when facing boycott or “buycott” campaigns, they change their behavior 25 percent of the time. That’s a much more tangible impact than protests have achieved against the Trump administration, which has followed through on many of Trump’s environment-related campaign promises to gut federal clean air, water, and climate change regulations.
So why are environmental activists still focusing their energy on changing the Trump administration’s behavior? They should shift resources and strategy toward corporate environmental activism—a proven approach that can make a meaningful difference in emissions reductions, and have the side effect of changing federal policy, too.
Former Congressman Henry Waxman spent 40 years in the House of Representatives trying to convince the government to address pollution and global warming. He knows what success looks like: He was the author of the 1990 Clean Air Act and the Safe Drinking Water Act of 1996, both of which are now law. Waxman also knows that things are different now under Trump’s anti-regulatory regime. “When you can’t rely on government at all,” he told me, “you have to find other ways to accomplish your goal.”
Waxman, who retired from Congress in 2014, is now the chairman of the environmental group Mighty, which is trying to build a consumer-driven environmental advocacy movement. Its latest corporate target is Tyson Foods, which it asserts is responsible for much of the water pollution across the Midwest. Rivers and streams in the heartland have long been plagued by fertilizer runoff from agricultural operations; the water supply in Des Moines, Iowa, for example, is often polluted with nitrate, which can be fatally toxic for babies. “When we’re talking about people who live in Trump country, they can’t look to government to help them because the policy of the Trump administration is not to regulate and not to protect public health, but to let industry do whatever is necessary for them,” Waxman said. “And so it’s important to have direct campaigns on these corporations to talk about the environmental impact of agriculture.”
Even though Tyson Foods does not itself own or operate many of the farms that pollute, the Big Meat behemoth buys its products from those farms, and is therefore the main driver of their demand. That’s why Mighty has been pressuring Tyson to better police its supply chain, recently delivering a petition with 70,000 signatures asking Tyson not to buy from farms responsible for widespread water pollution. According to Mighty campaign director Lucia von Reusner, the group also helped mobilize a rural Kansas community that successfully prevented Tyson from opening a large chicken-processing plant near their town.
While Tyson has not yet pledged to change its supply-chain practices, it has been steadily taking steps to improve its environmental image. In April, it hired a “chief sustainability officer.” In May, it eliminated the use of antibiotics in its chicken. Last week, the meat company increased its investment in Beyond Meat, whose plant-based meat substitutes contribute less to global warming. “Our strategic intent is to be the world’s best, most sustainable protein supplier,” Tyson’s new CEO, Tom Hayes, said last month. Von Reusner thinks these shifts were the result of negative press from her group and others, and of fear that consumers will stop buying what they believe to be harmful. “Sometimes your dollar is more important than your vote,” she said.
There have been many consumer-driven victories over the last two decades. Mitsubishi pulled out of a salt-extraction project at a UNESCO World Heritage Site in Mexico after cities in California passed resolutions against the company and people sent 700,000 letters of protest. SeaWorld ended its orca-breeding program. Nestlé stopped using palm oil linked to deforestation in its supply chain. Kimberly-Clark created new paper-procurement policies to reduce deforestation. And the company that owns Zara, the clothing chain, eliminated the use of fur. “The best way to drive change is to hold companies accountable,” Waxman said. “While politicians are becoming less and less accountable to the public, companies are finding they have to be more accountable to earn the public trust.”
There’s a notable limit to this strategy: the fossil fuel industry, which contributes most to climate change and physical pollution. Unlike agricultural, food, and most other corporations, oil, gas, and coal companies are less vulnerable to pressure from consumers because for many people, these fuels are a necessity. Car commuters need gas, for instance. This also applies to the utility companies that get their energy from fossil fuel sources. Ideally, consumers would have the option of using a company that gets more of its energy from renewable sources, but most Americans don’t have a choice of utility companies: There’s only one game in town, and people can’t afford to go without electricity or heat. Unless tens of millions of Americans begin fully powering their homes with solar panels, for instance, power utilities won’t feel pressured to move toward more renewable sources. (Von Reusner says Mighty is thinking about ways to mount consumer-driven campaigns against fossil fuels and other heavy industries. “We’re testing the model and expanding it,” she said.)
This caveat aside, it’s still far more likely that meaningful change will come from environmental activism targeted at corporations than at the government, and that corporate pressure may in turn force regulatory changes that activists alone can’t. For instance, as companies are pressured by consumer activists to change their behavior, they’ll want their competitors to be forced to do the same. Thus, they’ll lobby for government regulations to level the playing field—and who, after all, does the Trump administration listen to more than corporations? According to The Guardian, Trump’s EPA has granted almost every single one of the American Petroleum Institute’s demands for regulatory rollbacks. What if, in order to please these industries, Trump instead had to take steps to protect the environment?
Perhaps this is a long shot. But it’s far more realistic than activists’ convincing the Trump administration to do anything about climate change. Trump doesn’t care that tens of millions of Americans want environmentally friendly policies. Corporations do.
Sharon Zhang contributed reporting.
Toward the end of the 2010 World Cup, Julio Grondona made a prediction, or perhaps it was a promise, to a group of journalists in the gilded lobby of Johannesburg’s Michelangelo hotel, the five-star Italian-marble palace where FIFA, soccer’s international governing body, had established its tournament headquarters. Argentina had just been humiliated, 4-0, by the Germans, but Grondona wasn’t worried about the backlash. In 31 years as president of Argentina’s national soccer association, he’d endured personal scandal, government turmoil, economic collapse, and the ardent passions of the beautiful game’s fans. “Todo Pasa,” read the inscription on his big gold ring. All things pass—all things except, of course, Julio Grondona. “No one is kicking me out until I die,” he told the reporters.
Sure enough, Grondona, at the age of 82, was still the association president when he succumbed to an aneurysm in July 2014, nine months before the U.S. Department of Justice issued the first in a series of indictments placing him (“Co-Conspirator #1”) at the center of a vast criminal network that distributed “well over $200 million” in bribes and soccer licensing kickbacks. Alejandro Burzaco, an Argentine sports marketing executive who forfeited $21.7 million as part of a cooperation agreement with the U.S. government, testified in a Brooklyn federal court last month that negotiations over the media rights to Copa America, the quadrennial South American tournament, took place in Grondona’s room at the Michelangelo.
His signature alone cost $3 million, and Grondona expected equivalent installments each time the tournament was held. He preferred cash—up to $1.2 million a pop by the end—for Copa Libertadores, the continent’s premier annual club competition; for Argentina qualifiers and friendlies; and for fielding Argentina’s best talent, who otherwise might sit international matches out. (Lionel Messi, who attended Grondona’s funeral, allegedly received $200,000 per game, to share with his teammates as he saw fit.) But when it came to the Latin America rights for the 2026 and 2030 World Cups, Grondona asked Burzaco and his partners at Brazil’s TV Globo and Mexico’s Televisa to wire a $15 million payment to Switzerland.
Diego Maradona once noted that Grondona, with his jowly smirk, grey slickback, and great round belly, “sometimes comes off like a mafioso.” That impression certainly holds for this trial, which went to jury deliberations Thursday night and which has had, as Bloomberg’s Patricia Hurtado writes, all “the trappings of a grisly organized-crime prosecution.” (One of the defendants allegedly directed throat-slashing gestures at a seated witness.) Luis Bedoya, a former president of Colombia’s soccer federation, recalled being summoned to discuss illegal payments in the cramped back office of a gas station Grondona owned in the Buenos Aires suburb where he grew up. When one of Grondona’s alleged co-conspirators on the FIFA executive committee pushed for Japan to host the 2022 World Cup, Grondona, who had received at least $1 million to support Qatar’s eventually successful bid, accosted him in the bathroom between votes.
“There are written rules and there are invisible codes,” Burzaco testified, “and Julio Grondona was mainly concerned with people respecting those invisible, not written codes.” The senior vice president of FIFA and chairman of its finance and media committees, Grondona “knew everything” about everyone, said Burzaco. He had “the last word in the sense of authorizing, giving a green light, to each single payment of bribes. We would run subjects through him, and he would authorize, consider their relative amount of the bribe to one against the other, or sometimes he would keep part of the bribe of someone if he thought he was receiving too much.”
Of the more than 40 soccer officials and sports marketers indicted by the Department of Justice—many of whom have already pled guilty—30 are from Latin America, including all three defendants on trial in Brooklyn. That’s partly a function of geography, and the way the FBI’s investigation branched out from the $6,000-a-month Trump Tower apartment one U.S. soccer official rented out for his cats. But it would be wrong to assume that Julio Grondona is just another gangster with a lifetime supply of Adidas merchandise, or that the specific corruption on display this past month could simply be mapped onto another region. “FIFA,” says David Goldblatt, a soccer historian who currently teaches at Pitzer College, “is the most Latin American organization in global politics.”
Secretary of State Henry Kissinger opened the meeting in which he would bless a political crackdown in Argentina with a few light-hearted quips about soccer. Kissinger was attending the 1976 Organization of American States General Assembly in Chile, where he had, a few years earlier, helped orchestrate a coup d’etat. In March of 1976, Argentina, the last democratic holdout in the Southern Cone, had fallen to the military. Two days after the takeover, FIFA President Joao Havelange, who enjoyed a close working relationship with the military rulers of his native Brazil, heralded the new government, saying that Argentina was “now more ready than ever” to host the upcoming World Cup in 1978.
Kissinger, too, seemed enthusiastic about the tournament’s prospects. “No matter what happens, I will be in Argentina in 1978,” he assured a delegation from the country’s new dictatorship. “Argentina will win.” Kissinger’s counterpart, a navy admiral, expressed doubts. “If you can control an Argentine crowd when Argentina loses,” joked Kissinger, “then you can say you have really solved your security problem.” The rest of their conversation, transcribed in a declassified memorandum obtained by the National Security Archive, has been remembered primarily because of what Kissinger told the Argentines: that “if there are things that have to be done, you should do them quickly.” But what the Argentines told Kissinger is just as significant. “The terrorist problem is general to the entire Southern Cone,” said Foreign Minister Cesar Guzzetti. “To combat it, we are encouraging joint efforts to integrate with our neighbors.”
That may have been the first time anyone informed Kissinger about Operation Condor, the transnational surveillance, rendition, torture, and assassination program the South American dictatorships developed to track and target dissidents beyond their own individual borders. At the time, George H.W. Bush’s Central Intelligence Agency was monitoring the program—and its paid assets’ involvement therein—while the Condor states coordinated through a U.S. military communications system located in the Panama Canal Zone. And once Kissinger had been made demonstrably aware of Condor’s activities, the celebrated statesman would continue to signal his approval to the generals. Two weeks after the September 1976 car-bombing of former Chilean ambassador Orlando Letelier in Washington, D.C.—a Condor mission directly ordered by Chilean dictator Augusto Pinochet—Kissinger met again with Argentine Foreign Minister Guzzetti in New York. “Look, our basic attitude is that we would like you to succeed,” said Kissinger, reiterating that “the quicker you succeed, the better.”
Kissinger’s support for the dirty wars did not end with his term in office in 1977. True to his word, he attended the 1978 World Cup as an honored guest of the dictatorship. According to a recently released batch of declassified State Department documents, the trip was something of a private diplomatic tour, with Kissinger taking official meetings, posing for photo-ops, and generally working to undermine Jimmy Carter’s “romantic” concerns over human rights. Held under the slogan “We Argentines are right and human,” the entire tournament served as a public relations spectacle for the embattled regime. In addition to the bulldozer slum clearances and egregious public-works graft now synonymous with athletic mega-events, the forced disappearance and grotesque torment of regime opponents redoubled throughout the preparations—and indeed, continued during the tournament. Considerable evidence suggests Argentina’s decisive 6-0 second-round victory over Peru was fixed by the two Condor governments. But in the end, Argentina lifted the trophy—as Kissinger had predicted—in a triumphant final victory over the Netherlands. At a banquet put on by the government the following night, FIFA President Havelange raised a glass to his elated hosts. “Now,” he said, “the world has seen the true face of Argentina.”
Havelange was so pleased with the tournament that he awarded its planner, newly promoted Navy Vice Admiral Carlos Alberto Lacoste, a fast-tracked vice presidency at FIFA. Lacoste became an instrumental ally to Havelange, who fought to prevent Lacoste from being investigated for murder and mass embezzlement once Argentina transitioned back to civilian rule. When Lacoste at last stepped down in 1984, Havelange gave his position to Julio Grondona, the man Lacoste had tapped to run the Argentine soccer association in his absence.
Havelange had campaigned for the FIFA presidency four years earlier on a forward-thinking Third World solidarity platform, jetting across Africa and Asia with cash-stuffed envelopes and promises of inclusive development. Whenever possible, he also brought Pele, the game’s greatest, and most marketable, player. In How They Stole the Game, journalist David Yallop speculates as to where Havelange could have found the budget for such an extensive itinerary. There were the shady deals Havelange, a weapons dealer, had conducted with figures like Bolivian dictator Hugo Banzer. There was also the large, unexplained hole Havelange left in the finances of the Brazilian soccer federation, of which he was the longtime president under military rule. Andrew Jennings reports in Foul! The Secret World of FIFA that Adidas owner Horst Dassler started throwing cash around wildly once it became apparent that Havelange’s gambit might work.
However he pulled off his election, Havelange reached the presidency loaded with debts. He had vowed to expand the World Cup bracket—a mission that Lacoste would later help realize—but the immediate priority was to keep his coalition of smaller, poorer federations intact. For that, Havelange needed money, and for that, he turned to Adidas. Dassler shared Havelange’s ambitions for FIFA and the commercial reach of its product. And together with upstart ad wizard Patrick Nally, he took it upon himself to sell other blue-chip sponsors on that vision: Canon, Gillette, and the big get, Coca-Cola.
The beverage giant agreed to fund Havelange’s new soccer “development” program—which is where, in 1975, future president Sepp Blatter launched his career at FIFA—and to underwrite the exorbitant costs of the 1978 World Cup. But it was the first-of-its-kind exclusivity agreement that transformed soccer into a truly modern global commodity. By segmenting the footballing experience into discrete categories of consumption, FIFA could attract brands commensurate with its own multinational scope. “Never, in the history of sponsorship,” boasts West Nally, the ad firm Patrick Nally went on to cofound after a split with Dassler, “has one company obtained so much exposure and promotional benefits from a single sporting involvement, at such reasonable cost.”
The success of the Argentina experiment allowed Havelange to reinvest licensing revenues in the tournament, fulfilling a crucial campaign promise to expand the field for 1982. The following year, Dassler launched International Sport and Leisure (ISL), a functional prototype of the sports marketing firms indicted by the DOJ. With an office right across the street from FIFA’s Zurich headquarters, the company would sell TV and sponsorship rights on FIFA’s behalf, channelling tens of millions of francs in bribes and kickbacks to Havelange; his son-in-law Ricardo Teixeira, president of the Brazilian federation; and Nicolas Leoz, the president of CONMEBOL, South America’s soccer governing body. As long as the game kept growing, Havelange could keep bringing new member states to the table and buying loyalty both above and below it; it didn’t matter how competitive the contracts were. (When ISL collapsed in 2002, it became clear that they weren’t.)
With Grondona in charge of FIFA finances, Blatter elevated that ad hoc bagman-politicking into a sophisticated corporate machinery of shameless self-perpetuation. But it was during the 1978 World Cup that Havelange consolidated the basic nexus of patronage politics and corporate corruption exposed by the Department of Justice.
On the poster-board pyramid charts the prosecution likes to show the jury, Grondona’s face gets velcroed onto the highest rung, next to those of his fellow FIFA executive committee members Ricardo Teixeira and Nicolas Leoz. They were the true powers in South American soccer: Grondona, because of his decision-making prowess at FIFA; Teixeira, because of the prestige of jogo bonito and his influence with Globo TV and the lucrative Brazilian market; and Leoz, who secured a diplomatic immunity carveout for CONMEBOL’s headquarters in Asuncion, the Paraguayan capital, thanks to what Burzaco, the Department of Justice’s witness, referred to as his “strong links with the Paraguayan government.”
For much of Leoz’s career, that government belonged almost exclusively to Alfredo Stroessner. Leoz was president of the dictator’s favorite team, Libertad, which had gone 21 seasons since last winning the league. Then in 1976, Leoz turned the team over to Fredy Stroessner, Alfredo’s son. Libertad were crowned champions that very season.
Many of the Central American and Caribbean officials on the indictment list came up through the Havelange political apparatus. Trinidad’s Jack Warner, the former president of CONCACAF, worked diligently to assemble the small-island voting bloc that Blatter and Grondona would integrate into their political base and bribe network. Jeffrey Webb, the former banker and CONCACAF vice president, served as Grondona’s deputy on the FIFA finance committee.
But a remarkable cross-section of prominent DOJ targets have connections, alleged or proven, to the U.S.-backed dirty wars in Latin America. In 2013, Sao Paulo’s Folha newspaper reported that, in the early years of the Brazilian dictatorship, Marco Polo del Nero—who resigned from CONMEBOL and FIFA, but has stayed on as Brazil’s federation president—participated in the paramilitary extremist group Command of Communist Hunting. Del Nero and several allies responded to the report by lashing out at the journalist, but did not deny the specific claim. Alfredo Hawit—a former CONCACAF president, FIFA vice president, and FIFA executive committee member—appears in a 1993 state-sponsored human rights report on Honduras, which accuses him of collaborating in unspecified forced disappearances with General Gustavo Alvarez Martinez, the point man for CIA and Argentine intelligence operatives attempting to replicate Operation Condor in Central America. Hector Trujillo, the former Guatemalan magistrate and soccer federation general secretary, sat on the heavily criticized tribunal that overturned the genocide conviction of General Efrain Rios Montt.
In 2014, the left-leaning president of the Peruvian Congress claimed that Manuel Burga, one of the defendants on trial in Brooklyn, was “placed and designated” the Peruvian federation’s president by imprisoned dictator Alberto Fujimori. Another defendant, former CONMEBOL president Juan Angel Napout, inherited a business empire from his father, who accumulated exclusive import licenses through a close friendship with Alfredo Stroessner. The third, Jose Maria Marin, was a legislator and governor with the pro-military ARENA party in Sao Paulo before heading the state’s formidable soccer federation. In 1975, he took to the floor of the assembly, demanding the local government investigate TV Cultura, a free local station, for fomenting “great unrest.” Fifteen days later, Sao Paulo police abducted Vladimir Herzog, the station’s leftist editor-in-chief, and tortured him to death.
The era of market liberalization that coincided with democratic transition in the 1990s only deepened the culture of institutionalized corruption that had developed around soccer in the Americas. Alejandro Burzaco, who testified to paying tens of millions in bribes, made his fortune acquiring media and telecommunications assets during the orgy of insider privatization in early-’90s Argentina. Former Honduran President Rafael Callejas, who pled guilty to taking bribes as president of the national soccer federation, pursued a similar agenda of financialization and state service reduction in his country. Luis Chiriboga, the former Ecuadorian federation president, was running Deportivo Quito when he was elected city councilman on a neoliberal Christian Social Democrat ticket in 1992. Paraguayan President Horacio Cartes, an almost comically criminal businessman who campaigned with Alfredo Stroessner’s Colorado Party and has defended the dictator’s legacy since entering office, took over as president of Stroessner’s Club Libertad in 2001 and named its remodeled stadium after Nicolas Leoz, the soon-to-be-extradited former CONMEBOL president. According to WhatsApp messages introduced as evidence in the trial in Brooklyn, Juan Angel Napout traded World Cup tickets for political favors from Cartes and other Paraguayan officials.
Cartes has been held up as an example of the right wing’s resurgence in Latin America, following a period of widespread social democracy. But what’s lost in this narrative is the extent to which the leaders of the so-called Pink Tide internalized the logic of the post-Cold War governing consensus. Revolution and class war gave way to reform, progress, and growth, to a willingness to work with and within the system. As Bernardo Vasquez and David Cayon write in Futbol Para Todos, Julio Grondona “never before managed to establish such a well-oiled association with the government as the one he constructed” with Argentina’s Cristina Fernandez Kirchner—whose replacement, Mauricio Macri, has been subject to various scandals stemming from his tenure as club president of Boca Juniors. In Brazil, enthusiasm for the World Cup brought President Dilma Rousseff into partnership with Jose Maria Marin, an outspoken advocate for the dictatorship that tortured her.
It’s easy to oversimplify a complex, country-specific political process, but one recurring theme in the decline of the elected Latin American left is its acceptance of, and often complicity with, corruption and reactionary power. When Rousseff met FIFA Order of Merit recipient Henry Kissinger in New York in 2015, she called him “a fantastic person, with a grand global vision.”
As he processed the news that police had hauled off a group of officials from FIFA’s 2015 Congress in Zurich, Argentine sports reporter Sebastian Fest was surprised to find himself thinking, “God bless America.” “The extraterritoriality of U.S. justice disquiets many,” he conceded, but Uncle Sam had at last accomplished what “decades of lamentations and more or less spineless efforts” by Europeans couldn’t. “While we may question the role of the United States as a global watchdog in many respects,” a Swiss soccer scholar told The New York Times soon after, “this is one instance in which the extraordinary power of the Justice Department is working in ways that benefit the world at large.”
The FIFA corruption scandal, then, has been framed as an exception to the United States’s long history of disastrous foreign intervention. But there’s a sense in which cleaning up the mess of past intervention also serves to reinforce the rule. “They were trying to send a message to the world,” says Wayne State University law professor Peter Henning, an expert in the Racketeer Influenced and Corrupt Organizations (RICO) law being deployed in this case. RICO, Henning told me, is a “flexible statute, very expansive.” Created to dismantle the mafia, it’s been applied in various contexts, but not, for the most part, to prosecute corporations or financial houses. The decision to invoke it against members of FIFA, Henning says, may stem as much from the Department of Justice’s internal politics as from the facts of the case itself. FIFA has no constituency in the United States, besides the pool of sponsors now inching away from its scandals. It doesn’t employ lobbyists in Washington, and it doesn’t invest in think tanks and academic endowments. “This case can become a template for how to pursue international corruption,” says Henning. But it came to fruition because “they were picking out a target that couldn’t defend itself.”
It’s unclear whether the FIFA case signals a broader change in strategy for the Department of Justice, which has been criticized for its reluctance to hold corporate America accountable for criminal wrongdoing. And it’s unclear whether it will even signal a change at FIFA. In 2015, two days after the indictments hit, FIFA named CONMEBOL finance committee member Cristian Varela, a close ally of convicted Chilean federation president Sergio Jadue, to its disciplinary committee. A “Chicago Boys” economist and one-time social companion of Augusto Pinochet’s daughter, Varela has been accused of corrupt dealings relating to the TV equipment and production company he owns, which was privatized in the latter days of the dictatorship and does business with CONMEBOL. Earlier this year, FIFA appointed CONMEBOL president Alejandro Dominguez to occupy Grondona’s former seat on the finance committee. Alejandro is the son of Oswaldo Dominguez Dibb, a wealthy, allegedly criminal businessman, the former president of Club Olimpia, a one-time candidate for president from Alfredo Stroessner’s Colorado Party, and the brother of Stroessner’s son-in-law. In testimony last month, Alejandro Burzaco said that Dominguez’s predecessor Juan Angel Napout asked for bribes on his behalf. Dominguez, explained Napout, was not a very successful businessman.
The Austrian director Michael Haneke has been called a cynic, a nihilist, and a fraud. He’s called himself a “realist.” The question is whether the bleakness of his films represents a recognizable reality or whether his dark manipulations and depictions of violence—especially of suicide, but also torture, murder, and mercy killings—are merely sensational, calling attention to nothing more than themselves. That he works in the grammar of European avant-garde cinema—often employing long takes in which nothing much happens, building toward climactic scenes in which all too much happens—rather than in the vernacular of Hollywood thrillers and action films raises another question: Is he merely preaching to a choir of festival juries and art-house audiences?
The best of Haneke’s films suspend these questions by transcending their status as commentary on or critique of violence. Their despairing visions are mounted on genre forms that they overwhelm. It won’t do to call The Piano Teacher (2001) or Caché (2005) psychological thrillers or The White Ribbon (2009) a parable of German village life on the eve of World War I. They only become stirring by exposing how insufficient those genre labels are to describe them. The violence on the screen—the assault of Erika by her lover in The Piano Teacher, the suicide in Caché, the mutilated boy in The White Ribbon—verges on the unwatchable. With its leave-no-survivors story of home invasion, torture, and murder, Funny Games—the 1997 German-language film Haneke remade in English a decade later—is pure provocation, impossible to watch twice. Perhaps it was easier to shoot twice.
The violence in Haneke’s new film, Happy End, happens mostly off-screen. The film proceeds like a melodrama, but you may leave the theater thinking you’ve just seen the blackest of comedies. The story of three generations of a wealthy French family in the throes of several crises, it brings together themes from Haneke’s most recent films. As in Amour, which ends in a husband’s euthanizing his wife, there is the plight of an elderly patriarch who finds himself at the edge of a death that can’t come too soon. Like The White Ribbon, in which abused children turn violent themselves, the new film examines the effect of parents’ failings on their children. And as in Caché, the painful legacy of colonialism hovers over the story, this time through references to Europe’s migrant crisis.
Much of the action in Happy End takes place at the Laurent family mansion in Calais, near the Calais Jungle, an encampment where migrants gathered from 2015 until its evacuation in the fall of 2016. The Laurents own an industrial construction firm, and they may be held responsible for the collapse of a foundation wall that occurs in the film’s first few minutes. The firm is managed by the middle-aged daughter, Anne (Isabelle Huppert), who is grooming her twentysomething son, Pierre (Franz Rogowski), to succeed her. But Pierre was managing the site of the accident and may be implicated in the collapse. His behavior has become erratic, and Anne thinks he’s drinking too much.
Meanwhile, there has been a suicide attempt. The patient, who soon dies, is the ex-wife of Anne’s brother Thomas (Mathieu Kassovitz). Their child, the twelve-year-old Eve (Fantine Harduin), will be coming to live with her father and the rest of the Laurents after a long estrangement. Thomas has remarried, to Anaïs (Laura Verlinden), and they have an infant son. The Laurent paterfamilias, Georges (Jean-Louis Trintignant), is in his mid-eighties and losing his marbles, a condition he frequently points out to his family. Haneke weaves these crises together in an exquisitely escalating mood of unease until a finale that replays the tragedies of the earlier films as farce and earns Happy End its ironic title.
Haneke, who is 75, worked in the theater and television for decades before directing his first feature, The Seventh Continent, in 1989. The story of a couple who commit suicide and give a lethal injection to their daughter, The Seventh Continent was based on a family suicide Haneke had read about in the newspaper—certainly a grounding in reality, if an extreme one. The names Georges and Anne recur in Haneke’s films: They are the names of the couples in The Seventh Continent, Amour, Funny Games, and Time of the Wolf. The last name Laurent also recurs. Haneke’s repetition of names—some variation of Eve often marks an innocent daughter—suggests that his characters are somehow interchangeable. In the long story Haneke tells of middle-class misery and rot, we’re all acting out the same dramas and committing the same transgressions.
An exception is The Piano Teacher, which is less the story of a universal condition than of a grown woman trapped living with her oppressive mother, violently resentful of her students, and harboring secret obsessions. Huppert’s performance as Erika is a triumph of nervy contradictions. In Happy End Huppert is the anchor of the ensemble. Her Anne assumes directorship of the family firm—she’s marrying the banker (Toby Jones) who brokers the transition—works to minimize the payouts to the victim of the construction accident, and tries to maintain a semblance of decorum in the Laurent household.
The Laurents are a family that dines formally, waited on by Jamila (Nabiha Akkari), their Moroccan maid. The dinner table is a site of humiliation for Georges because his memory is failing. When Eve arrives, he repeatedly fails to recognize her or remember why she’s rejoined the family—he last saw her ten years earlier. In the middle of the night he creeps into the garage, drives away, and crashes into a tree. The failed suicide attempt leaves him with a broken ankle and confined to a wheelchair. Later he asks his barber to help him kill himself, and seems to make the same request of a group of men he encounters on the street. These scenes come close to sending up Amour, in which Trintignant’s Georges agrees to kill his wife in order to spare her another return to the hospital.
A morbid bond links Georges and Eve in Happy End. They’re the ones connected to death, while the generation between them persists in a selfish state of denial. Anne attends to business, Anaïs lavishes her attention on the infant, and Thomas conducts a dirty-talk cyber-affair with a woman we barely see. Eve has a secret life of her own, streaming videos of obnoxious teenage boys online, surreptitiously making videos of her relatives around the house, and snooping on her father’s laptop. The messages she discovers convince her that her father will abandon her stepmother, that he loves no one, and that she’s destined to wind up in a home. She turns her quiet rage on herself and makes her own suicide attempt.
The central scene of Happy End also links back to Amour. Eve visits Georges’s office at her father’s request. Georges tells her that he’s not interested in asking her why she tried to kill herself, but he does so anyway. She has nothing to tell him in reply. He tells her the story of his wife’s death. Around the time of Eve’s last visit to Calais, at age three, her grandmother had fallen permanently ill. Georges handed over the duties of running the family firm to Anne and devoted himself to his wife’s care. At last, he smothered her. Grandfather telling the story to granddaughter seals their bond and tacitly establishes a pact between them.
Mimicking the structure of a comedy, Happy End culminates in a pair of celebration scenes. It’s here that the subtext creeps to the fore. At Georges’s eighty-fifth birthday party he’s moved by a cello recital, but again shows signs of senility. He represents the old Europe, and his appreciation of culture will disappear with him. Anne and Thomas try to put a good face on the proceedings, but a drunk Pierre announces to the guests that they should enjoy a dish prepared by Jamila, “our Moroccan slave.” Pierre, we know, has been coping with the fallout of the accident at the construction site. The family of the accident’s victim has been demanding restitution, and when Pierre visits their home (they seem to be Eastern European immigrants), he’s assaulted by one of the victim’s relatives—an altercation that is shot stunningly from afar. Afterward Pierre is undone by a sense of guilt that his mother and his uncle repress.
The conclusion of Happy End is too neat, too intricately determined by foregoing events, to qualify as realism, but it still induces the grim sort of laughter that comes with witnessing contradictions laid bare. The reception for Anne’s wedding takes place on a sunny afternoon by the sea. The family is gathered around the table, surrounded by guests, a conspicuously white crowd. Pierre arrives late in the company of several black men, interrupts the proceedings, and begins to introduce the men to the crowd. The first is a migrant whose family was killed by Boko Haram. Anne quickly breaks up this stunt, and Pierre is dragged out of the room. The men are offered a table and a meal, and Anne apologizes to her guests, explaining that Pierre really is “a good person.”
The scene—absurdly, sinisterly—mimics the paralysis of liberals in the face of Europe’s migrant crisis. The guests are here: Will they be recognized? Will that recognition be more than a token? Will they be offered a seat at the table and a share in the continent’s comforts, or will they be shuttled out of sight? Anne’s response to the scene is to restore order and accommodate disruption. Georges’s is altogether different: He enlists Eve to wheel him away from the luncheon and steer his wheelchair down a hill and into the sea. We see this last suicide attempt from two points of view: first head-on, as if from the water, as granddaughter drives grandfather toward oblivion; then, after Eve lets go of the wheelchair, from the vantage of her smartphone’s camera, as Anne and Thomas run down the hill to try to save their father. Her act of recording what might be her grandfather’s last moments comes to her completely naturally, a perverse storytelling impulse that mirrors Haneke’s own. You could also call it a sign that the next generation will bear witness to their mothers’ and fathers’ sins and their suffering.
Ajit Pai thinks net neutrality held us all back. It stifled innovation, he said. It deprived consumers of choices, starved rural communities of broadband, and even threatened the survival of the internet itself. Pai, who lobbied for Verizon before Barack Obama appointed him to the Federal Communications Commission, has prioritized the repeal of Obama-era net neutrality rules since becoming its chairman under President Donald Trump. Considering Pai’s background in the telecommunications industry, which has long wanted more control over how it delivers internet content, it makes sense that he has targeted the rules classifying internet service providers as Title II carriers, which subjected them to stricter regulation and prohibited providers from creating “fast lanes” that users could access for a fee. On Thursday afternoon, the FCC voted 3-2 to repeal the rules, and Pai killed his dragon.
“You can still drive memes right into the ground,” he joked in a Daily Caller video released ahead of the FCC vote. Pai’s jocularity, however, hides a more sinister truth: His war on net neutrality is an extension of the Trump administration’s war against the working class. His justifications are the same justifications put forward in any defense of deregulation, whether it’s in the financial industry or the oil and gas sector. The result is the same: More profits for companies, with dubious benefits for consumers. Corporations exist to make money, and making the internet as cheap and accessible as possible is not in their financial interests. They certainly are not in the business of making sure the people with the least disposable income, or who live the farthest away, can get internet access.
In conversations with The New Republic, net neutrality advocates expressed deep concern about the FCC’s vote. “You can’t even get a job at Walmart without having reliable internet access, let alone do a complicated homework assignment or find a great health care plan,” said Hannah Sassaman, policy director for the Media Mobilizing Project. “All of the basic pieces of dignity that most of us who have reliable and affordable internet take for granted go completely out the window without net neutrality.”
The impact on rural communities could be particularly severe. The FCC itself found in 2016 that 39 percent of rural Americans lack access to internet speeds of at least 25 Mbps. That figure increases to 41 percent for Americans living in tribal territory and it’s higher still—68 percent—for anyone living in rural tribal territory. “Without net neutrality rural residents will experience another form of systemic oppression and discrimination, I just know it,” said Dr. Jacquelyn Bragg, coordinator of the Community Economic Development Network of East Tennessee. “It is critically essential to economic development.”
Marty Newell, who coordinates rural broadband policies for the Center for Rural Strategies, rebuts one of Pai’s favorite arguments for repeal: that net neutrality made it more difficult to bring broadband access to rural communities. “There really is no data to show that net neutrality has had a significant impact, either for or against investment in broadband largely, or in rural places specifically. The numbers just don’t hold up when you look across the whole country,” Newell said, explaining that a “significant number” of rural communities can only access satellite telecommunications infrastructure, while some have only one provider. He added, “What our experience has been is the big cable providers only come to rural areas when someone holds their feet to the fire. The economic case is not as good as it is for more densely populated places.”
Rural people need broadband more than large providers need rural people. There simply aren’t a lot of potential customers located in isolated, often impoverished, communities.
Urban areas are better off, but accessibility gaps exist there too. “Poor and working people, particularly people of color, have a far, far more disproportionate lack of access to the internet than white communities,” Sassaman explained. “In Philadelphia, we have the third-worst internet access of any big city in the United States, behind Detroit and Memphis, both of which are cities that are also predominantly communities of color.”
The FCC’s deregulation push could, she added, force working class people to pay for internet access twice: once for basic access, and again for equitable access. That’s a direct challenge to social mobility. Not only would that complicate the process of applying for jobs or for college admission, it would also put an obstacle in the path of people trying to start their own businesses. The entrepreneurial spirit the GOP loves may not survive its deregulation campaign.
As the tech industry marches on, the internet will become entwined ever more deeply in our collective lives. It is a public space. It’s a place of protest; a market; even a university stand-in, for some. A stratification of the internet is a stratification of society, and the Obama administration’s decision to regulate the internet more like a public utility acknowledged that basic truth and sought to prevent its realization. (Though Obama was required to appoint a Republican commissioner, his decision to pick Pai in particular remains a mystery with damaging consequences.)
For the Trump administration, by contrast, stratification is a raison d’être. From the tax reform bill winding its way through Congress, to the repeated attempts to repeal Obamacare, to the gutting of the Consumer Financial Protection Bureau, we see again and again policies that enrich the wealthy and lock the poor into an inferior class. The internet is simply the next frontier for it to conquer.
When I saw the words “The Silence Breakers” splashed across TIME’s Person of the Year issue last week, a line of dialogue began playing on a loop in my head. It was pulled from a scene of television from 2013, one that often flickers in the back of my mind. I saw Laura Dern as Amy Jellicoe, scowling in a sunny boardroom in the finale of Enlightened. The scene is one of divine retribution. Amy sits, seething in a denim jacket, across from the high brass at Abadonn Industries, a fictional Southern California pharmaceutical conglomerate where the character toiled for 15 years. The suits have just learned that Amy is a whistleblower who leaked incriminating emails to the Los Angeles Times. The head honcho glowers across the lacquered conference table, violence in his eyes. “Who are you?” he growls, by which he means what gives you, a woman far down the corporate ladder, the right to destroy my life?
Dern, whose elastic face is one of Hollywood’s great instruments, frowns wearily. “I’m just a woman who is over it,” she sighs. “I’m tired of watching the world fall apart because of guys like you.” As she leaves the meeting, the CEO flies into a misogynistic rage, spittle rocketing from his mouth. He hisses and turns red, telling her he will crush her, calling her a “psychotic fucking cunt.” After the elevator doors close and Amy is safe from any physical threat, she tries to suppress a nervous grin. Her life may have just exploded, but she won’t be the one going to jail.
This scene aired shortly before HBO announced that Enlightened was cancelled. Mike White, the show’s creator, and Laura Dern—who became an executive producer—had planned out a future trajectory for Amy to follow in season three, but it was never made. The cancellation was a blow to critics and to a small but loyal cadre of evangelist fans, who crowed about the show’s merits so fiercely that by the time the show ended, it had a near-perfect Metacritic score. And yet, White has said that the verdict did not come as much of a surprise. He glazed the finale in hopefulness for Amy, just in case this was her swan song. The network had already given the show one lifeline, greenlighting season two despite dismal numbers, some of the worst on cable. Streaming television hadn’t really taken off yet. (House of Cards premiered right as Enlightened was dying.) The show was one of the last great cable gems to wither on this cusp.
I have been thinking about Enlightened a lot in 2017. I recently re-watched the entire series and it felt as fresh as anything made this year. It is not that the show was ahead of its time—more that it now reads like a vital warning, a visual message in a bottle washing up on this anxious shore of a year. When once I looked at Amy’s new-age crusades with a mixture of cringey annoyance and pity, I now look on her journey through the series with a newfound tenderness and curiosity. This is not to say that I think the future is going to be made by an army of Amys—she is, after all, a white woman of privilege whose solipsistic search for meaning causes her to steamroller everyone else in her path. But I am seeing nuances in her story that I never did before, and new ways in which her dogged pursuit of justice in the workplace feels especially vital.
Earlier this week, New York magazine columnist Rebecca Traister published a column entitled “This Moment Isn’t (Just) About Sex. It’s Really About Work,” arguing that now, in the post-Weinstein reckoning, we must turn our attentions toward systemic workplace harassment rather than focusing solely on sexual assault. She writes that while the #MeToo moment is a powerful watershed, the important conversations we should be having are not simply about sex, and who gets to hug whom by a watercooler, and whether or not it is okay to marry your boss. “I am just as worried about what we will not do,” she concludes. “The thing that is harder and more uncomfortable and ultimately inconceivable: addressing and beginning to dismantle men’s unjustly disproportionate claim to every kind of power in the public and professional world.”
Like Amy, plenty of women now are over it and ready to demand change in the workplace. That final fearsome scene shows just how much is at stake when women try to move the needle. I keep watching it, not as a clear template for action, but as a parable about what can happen when a woman decides she is finally done.
When Enlightened debuted, no one knew quite what to make of it. Amy Jellicoe was as unlikable as heroines got on TV in 2011: a high-strung, self-obsessed do-gooder who exudes weaponized optimism. Emily Nussbaum of The New Yorker coined a term to classify this new type of screen heroine: “hummingbird.” Leslie Knope was a hummingbird. So was Tracy Flick. So was Carrie Mathison, Claire Danes’s ever-frustrating CIA agent on Homeland. Hummingbird women are obnoxious. They never seem to know when to end a conversation. They are always in trouble with somebody. They always seem to live inside a vortex of personal drama. In one episode from Enlightened’s first season, Amy’s mother—played by Dern’s real-life mother, Diane Ladd—muses about how when Amy was younger, she would cycle through an endless roulette of close friends. “You’d get a fix on a girl, and then you’d have a falling out, and all the tears. Then, a new girl would show up. And she was your best friend. So involved.” Hummingbirds are always so involved.
It is possible that Enlightened failed to find traction with audiences because no one knew exactly how to love a character like Amy Jellicoe. She was layered, but no one layer provided a steady foothold. She starts the series by returning to work after a nervous breakdown that was the result of sleeping with her boss. Immediately she starts to apply a shellac of self-help moonshine to every corner of her life. She meditates, she does yoga, she reads books about flowing through your rage. She sermonizes to her reticent mother, as well as to her ex-husband Levi (Luke Wilson), who has become a drug addict, alcoholic, and daytime sleeper. When she goes back to work at Abadonn, in an aggressively peppy yellow sundress, she is relegated to a basement office with the company’s other “unplaceables,” doing data entry. The company wanted to fire her but, fearing a lawsuit, merely placed her in the corporate equivalent of hell. From one point of view, Enlightened is an extended horror movie about human resources.
It is at work where viewers are meant to feel the most sympathy for Amy. Ousted from her old department because she had a mascara-soaked meltdown after an affair with her superior, she is punished for trying to return to the fold. In the basement, she meets her depressive desk-mate Tyler (Mike White) and Dougie (Timm Sharp), her stoner creep of a manager. His first words to her are, “Damn girl, you’re tall as shit.” When she says she isn’t good with numbers, he says, “Hey, tall and blonde and lovely, don’t need to be good with numbers.”
Both of these men, pathetic and comical though they may be, sexually harass Amy in the first season. Tyler tries to kiss her after a late-night working session, misreading her cheery congeniality as a flirtation. He is humiliated, and the two eventually reconcile, though not before he calls her a “fucking dumbshit” in a room full of her coworkers. Dougie’s transgression is more extreme; when the entire office goes to a club to celebrate his promotion, he drinks to excess and then gropes Amy on the dance-floor. Horrified, Amy tells a colleague who is interested in Dougie not to date him. In the next episode, Dougie takes his revenge, attempting to get Amy fired because she thwarted his game. “You cockblocked him,” Tyler tells her, as she is on her way up to meet with HR. Amy reports the harassment and keeps her job, but it is not a clean victory. By the end of the season, she starts to imagine burning down the office, dousing the entire place in kerosene. Corporate America is sick, she says, and the only solution may be cleansing fire.
At the time, we were supposed to read Amy’s fantasies of a better, more sparkly world as satire. Mike White, who himself suffered a breakdown after developing a show for network television, has said that he returned from therapy with hope for the world, but also with the knowledge that much of corporate life leaves behind a sour taste. No one person can endure under the crushing weight of the system, even a person with Amy’s relentless desire to reach a higher plane. When I watch Enlightened now, I see that it is a show, above all else, about the systems that keep us down. Even the despicable Dougie is trapped: underground, in a literal glass bubble of an office, in his need to affirm his masculinity with come-ons. In season two, he’s fired. “This whole time I was working here, I thought I had some power,” he says, reading a memo detailing his dismissal. “I thought I was somebody.”
If we cannot root for Amy at the beginning of the show, it is because something about her flapping her wings in the face of such an oppressive system feels slightly deranged. No one wants to see a hummingbird slam into a glass door over and over. Amy’s ideals are grand, but her execution is always clumsy and sometimes cruel, and it is painful to watch her scrape her knees over and over on the sidewalk of good intentions. Now, when I watch, she seems more heroic. She sees more than most, and suffers more as a result. She is also petty, resentful, and selfish, believing that she is owed a pathway out of the bad system, but rarely concentrating more widely on the freedom of others. When she is offered a job at a homeless shelter, she balks at the low salary. Her yen to heal the world often ends at her own front door.
In season two, when Amy hatches the plot to work with a muckraking journalist named Jeff Flender (Dermot Mulroney) at the LA Times, Enlightened becomes a heist story. It is in this season, which I think is far better than the first, that Amy becomes a true hero, and one that I keep thinking about in light of the revelations that have emerged in recent months. She is mad as hell, and finds a way to expose the company to the outside world. In the last moments before the LA Times piece runs, Amy gets cold feet, lured by the CEO’s offer of a lucrative corporate watchdog position that would enable her to advocate for change from within. Still, she realizes that when it comes to injustice on a systemic level, change rarely comes from inside the house. We have learned, in exposé after exposé, about how many layers of protection Harvey Weinstein had: lawyers, talent agents, investors, journalists, an entire apparatus set up to launder his abuses. It was only when women put themselves on the line and told their stories that the gears started to move. As Jeff tells Amy, “the zombies will wake” only when the story hits.
Enlightened ends on a melancholic note. Amy has become the agent of change she always wanted to be. She is also out of a job. We never get to learn about the lasting consequences of her actions. The CEO is clearly toast. Perhaps Abadonn goes the way of Enron. Perhaps a new leader, who claims to have cleansed the company of its negativity in a pat statement, begins the cycle anew.
I used to take the cynical view. I thought, clearly, Amy’s bravery didn’t move the needle much. But now, at the end of 2017, I am of two minds. We live in a more anxious time than ever. Many of us have become unwitting hummingbirds, tweeting as fast as we can. We want so much for the world and for each other. As abusers’ deeds come to light, and we reckon with the treatment of women at work, there are glimmers of hope that we might see real change. As Traister and others have written, we must keep talking about the larger systems. In an essay called “The Unsexy Truth About Harassment” in The New York Review of Books, Melissa Gira Grant wrote that “What is powerful about this moment, what is threatening, is that in place of women’s refusals, there are not only demands, but desire for a world in which sex, work, and power are not ruled by false notions of virtue and victimhood.”
It was only after Amy decided, at long last, that she was over it—her enlightenment would come not from self-improvement, but from using her resources to expose cracks in the system—that she was able to be powerful. If watching Enlightened in 2017 has taught me anything, it is that we must start there.
Throughout the last presidential campaign, no Republican hated Donald Trump more than Lindsey Graham did—or so The Washington Post declared, with the evidence to back it up. The South Carolina senator frequently warned of the dangers of nominating Trump, calling him a “jackass” and “a race-baiting, xenophobic religious bigot.” “You know how you make America great again?” Graham, then a presidential candidate himself, said in December 2015. “Tell Donald Trump to go to hell.” After quitting the race, he continued to warn of the dangers of nominating Trump:
Graham’s prediction didn’t pan out that November, when Trump eked out a victory over Hillary Clinton and the Republicans won unified control of the government. But the recent Democratic victories in Virginia and Alabama suggest that Graham’s forecast was clear-sighted about Trump’s impact on the Republican Party. He’s a historically unpopular president, and he’s pulling the GOP down with him.
Paradoxically, as the extent of Trump’s political toxicity becomes clear, Graham himself has transformed into one of the president’s biggest supporters. This trajectory from foe to friend illustrates the problem the Republicans face. As the president and standard-bearer of the party, Trump possesses power that Republicans covet, and they want to influence him. This has led Republican politicians to consolidate around Trump, whatever their previous criticisms and personal concerns were. The oft-predicted Republican crack-up hasn’t happened. Rather, the party is contracting into a personality cult, one whose fate is tied to a much-hated president. As Trump’s popularity declines, so do his party’s electoral chances.
“The victory Tuesday by Democrat Doug Jones to represent that heavily conservative state [of Alabama] in the Senate,” the Post reported, “was the latest example in a string of elections this year that Democratic leaders think represent a growing backlash against President Trump—and a potential building wave for 2018.” What was unthinkable a few months ago is now a serious question: Can Democrats win back not only the House of Representatives next year, but the Senate, too? “I worry that the Senate is in play. I didn’t think that before yesterday,” Alex Conant, a Republican strategist, told Politico. “If the political environment is still like this in 11 months, Democrats might be able to defend their incumbents and pick up the seats they need out west.”
There’s no question that the political landscape has become much more favorable to Democrats, yet the true lesson of these recent elections is that the party can’t count on a Trump backlash to deliver a wave victory in 2018.
Republicans could write off the Alabama loss as an anomaly. After all, it’s hard to win with a candidate who’s credibly accused of child molestation. But Roy Moore won the nomination, over establishment favorite Luther Strange, by harnessing the same anti-establishment anger that made Trump president. Moreover, as FiveThirtyEight’s Nate Silver notes, the shift in the vote from Trump’s landslide in 2016 to Moore’s narrow loss in 2017 wasn’t just caused by Moore’s personal flaws, but also by shifts in party enthusiasm. In this year’s special elections, there has been an 18-point shift from Republicans to Democrats.
Alabama saw such a large swing partly because the Democratic Party made a more concerted effort to energize black voters, rather than take them for granted. As David Weigel and Eugene Scott noted in the Post, the Democratic National Committee used a “quiet strategy in Alabama” that included “a $1 million investment in millennial and black voter turnout that was not advertised until the election was won. That was just one of the efforts that paid off for Democrats in Alabama, where new third-party groups including Woke Vote and BlackPAC engaged in weeks of voter persuasion and targeted messages.” Democratic pollster Cornell Belcher told the Post that the party “didn’t just come in at the end and treat black voters like get-out-the-vote targets. They treated them like persuadable voters. They actually engaged on the top issue for African American voters, which is criminal justice reform. And they didn’t dance around the issue of police brutality.”
Some analysts have suggested that Democrats projected a more centrist image.
This analysis seems slightly off-base. Ralph Northam, the governor-elect of Virginia, and Jones, the senator-elect of Alabama, are hardly centrist in the context of their states. Both men ran as strong supporters of reproductive freedom. Jones took forthrightly liberal positions on health care (supporting the Affordable Care Act and calling health care a right) and LGBT rights. Northam campaigned on raising the minimum wage to $15. If Northam and Jones seem “anodyne,” that’s less because of their politics than their style. Both men are reassuringly bland, an appealing contrast to Republicans in the age of Trump and Moore. Perhaps the true lesson is that you can win as a strong liberal if your personal demeanor is unthreatening.
To the extent that the sexual assault accusations against Moore hurt him, that also has wider implications beyond Alabama. As New York magazine columnist Jonathan Chait notes, such scandals are likely to have a disparate impact on the two parties. While the Democrats are forcing out their alleged harassers, the Republicans are stuck being the party of Trump, himself an alleged predator of women. Moreover, men make up a significantly larger part of the Republican political leadership. Reports of harassment “will hurt the majority party much more,” Chait wrote. “The reason is that 2018 is shaping up as a wave election. In wave elections, the out-party usually loses very few seats. It is the in-party that loses. If Democrats are forced to step aside, they can easily be replaced. Republicans who have to step aside cannot. Incumbents pressured into retirement will open up seats that might otherwise not have had competitive races.”
Paul Maslin, who worked as a pollster for Jones, offered his insights to L.A. Times reporter Mark Z. Barabak. “The lesson is we need to keep being aggressive, fighting him everywhere,” Maslin said. “There’s no reason we can’t win Tennessee, there’s no reason we can’t win Arizona and Nevada. There’s no reason we can’t win congressional seats all over the place.” But Maslin also offered a cautionary note: “[W]e can’t simply be naysayers. We’ve lost credibility in the Midwest, in places like Pennsylvania. The Democratic Party is seen as being out of touch, elitist, without any good ideas on economic or pocketbook issues. We’re going to have to give people a sense we’ve turned the page and we’re not the same old same old.”
Maslin’s wise counsel suggests that Democrats need a two-pronged approach to riding the coming wave. First, they have to play up anti-Trump sentiment, which will help make many races competitive by energizing the Democratic base and appealing to disenchanted Trump voters (or at least convincing them to stay home). But Democrats also have to offer a positive agenda, especially to core voters such as people of color and women, and also provide the organizational resources to turn out those voters. Being reflexively anti-Trump will help Democrats win, but Trump won’t be on the ballot next year, and he won’t be around forever. For lasting success, the party will also have to out-hustle and outthink the Republicans.
Even after the Cold War, when the Soviet Union collapsed and the threat of nuclear destruction receded, the world’s nuclear states continued to follow the diplomatic and behavioral norms of that era. Years of inflammatory statements by North Korea and Russia, for example, were met with comparative calm from the United States, whose responses were carefully calibrated to deter aggression. Global leaders learned to look to the United States as a voice of nuclear reassurance.
Donald Trump’s inability or unwillingness to abide by these norms has revived the sense that Armageddon is once again within the realm of the possible. Tensions with North Korea have grown under his administration, and the danger of nuclear confrontation is now higher than at any time since the Cuban missile crisis. Part of this, of course, is due to North Korea’s pursuit of nuclear capabilities. But much of the blame lies with Trump himself.
Instead of working to de-escalate tensions, Trump has vowed to unleash “fire and fury” on the North Korean people and insulted Kim Jong Un, ridiculing him on the world stage as “Little Rocket Man.” Such reckless talk may simply be “Trump being Trump,” or part of his oft-stated strategy of “unpredictability” in negotiation. But predictability is essential to the stability that has protected us against nuclear annihilation for more than half a century. To North Korea’s leaders, Trump’s comments serve only to increase their sense of insecurity, fuel their desire to possess nuclear weapons, and decrease their hesitation about using them. This isn’t “winning”; it’s akin to lunacy.
Trump’s tweets and undisciplined remarks aren’t the only things setting the United States on a treacherous nuclear path, however. The Trump administration is currently preparing a new nuclear posture review, the government’s first since 2010. This comprehensive document outlines the nation’s nuclear capabilities and the conditions under which these weapons might be used. Current U.S. policy states that the president will consider using nuclear weapons only if another nuclear state launches a nuclear attack (or a massive conventional attack) against the United States or its allies—in other words, only in the most extreme circumstances. As the world’s last conventional superpower, the United States has every incentive to keep a fight from going nuclear, since such weaponry is the only counter to its conventional military dominance.
Trump, however, is reportedly pushing for the development of new nuclear weapons and delivery systems, and is expected to expand the circumstances under which they may be used. If this happens, the global nuclear balance may shift irrevocably. Rather than continuing the American role as the essential stabilizing force committed to preventing the proliferation of nuclear weapons, Trump could fuel a new nuclear arms race and increase the incentive for America’s enemies to strike first.
Trump’s expected overhaul of the country’s nuclear posture comes at a critical moment. The United States possesses the most fearsome nuclear arsenal on the planet: 4,000 weapons, 1,800 of which stand ready to strike anywhere in the world in 30 minutes or less. But these weapons, designed and built in the 1970s and 1980s, are old. The current “triad” of submarines, land-based missiles, and long-range bombers is being replaced so that the United States can maintain its nuclear readiness.
Barack Obama began modernizing the nuclear arsenal before he left office, an effort that will cost an estimated $1.24 trillion over the next three decades. It might seem surprising to some that Obama would call for such changes. In 2009, he pledged to seek “a world without nuclear weapons,” and in 2010 he negotiated an arms treaty with Russia that required both countries to reduce significantly the size of their nuclear arsenals. Certainly, the United States does not need so many nuclear weapons. The Joint Chiefs of Staff concluded in 2013 that the United States could safely reduce its nuclear arsenal by an additional one-third and still meet its military needs.
But Obama was also committed to ensuring that the United States can still fend off nuclear attacks and prevent other countries from pursuing their own nuclear programs. His policy attempted to strike a balance—restricting the role of nuclear weapons while maintaining America’s deterrence capability. Obama supported the goal of making the “sole purpose” of nuclear weapons to deter a nuclear attack, and also considered a “no first strike” rule—though ultimately he did not put such a limit in place.
Trump clearly has no affection for Obama’s pragmatic approach. “Why have them if we can’t use them?” Trump reportedly asked one of his foreign policy advisers during the campaign. Last summer, he also reportedly told military leaders that he wanted a tenfold increase in the nuclear stockpile, returning to the highest levels of the Cold War. And he is apparently considering the development of smaller, more “tactical” nuclear bombs.
Some nuclear strategists have speculated that the U.S. nuclear arsenal is so powerful that the United States would never use it. The loss of life and damage to the planet would be too great. They point to Russia’s threats to use nuclear weapons against NATO as a sign that Moscow does not view nuclear retaliation as credible. Smaller bombs, the thinking goes, would send a clear signal to Russia that its use of nuclear weapons would be met with an appropriate response.
There is no evidence that Russian officials think this way, however. Russia’s statements about using nuclear weapons should be viewed as tough talk to mask its military inferiority, rather than as a sign that it doubts America’s resolve. Building smaller bombs won’t keep other nuclear powers in check. But it could encourage the United States to use nuclear weapons, even if doing so is militarily and politically unnecessary. It would also give Russia a pass on its own threats of first use, instead of making it increasingly a pariah for its strategy of escalation.
Worse yet, unlike Obama, whose policies limited the circumstances under which nuclear weapons could be justified, Trump is likely to expand the rationale for nuclear war—in response to an attack on critical U.S. infrastructure, for example, or a large-scale cyberattack. And he has signaled that he may once again consider using nuclear weapons against nonnuclear states. All of this, combined with Trump’s bellicose, and at times unhinged, statements, ratchets up tensions around the world and makes it easier for other countries—Russia, Pakistan, and even North Korea—to threaten a nuclear response to nonnuclear scenarios.
Can Trump be contained, and if so, by whom? As tensions with North Korea have increased, lawmakers in Congress are considering legislation to restrict the president’s ability to order a nuclear first strike. One bill would require Congress to declare war before nuclear weapons can be used. Another currently in the works would make “no first use” the official policy of the United States. Senator Bob Corker has energized the debate with his comments that Trump’s recklessness could lead to “World War III” and that only the secretary of state, the secretary of defense, and the White House chief of staff stand between the United States and “chaos.”
In today’s polarized environment, however—in which only retiring lawmakers seem to say what everyone else is thinking about the dangers posed by Trump’s judgment and temperament—these bills are going nowhere. This is a shame, since it makes no sense for any president, Democrat or Republican, to have the ability to use nuclear weapons against a country that has not attacked the United States or its allies with such weapons first.
Even if Trump cannot be constrained politically, he may find himself hemmed in financially. The $1.24 trillion price tag to modernize the nuclear arsenal already exceeds initial estimates and is expected to keep growing. On its own, the nuclear bill is something the United States can afford, even if it is extravagant and increasingly unnecessary. But the financial demands come just as the bills for other major programs—the new F-35 fighter jet, in-air refueling tankers, and Navy combat ships—are coming due. Trump has just asked for billions more to expand missile defenses against North Korea—despite the evidence that America’s missile defense capabilities are much less reliable than most people believe. The government cannot possibly meet all of these financial obligations. Regardless of what Trump says he wants, priorities will have to be set and choices made.
Whatever proves to be the fiscal reality, Trump’s rhetoric is opening the door to a new era of instability. For 70 years, through the Cold War and after, nuclear powers acted according to a prescribed set of rules and codes of conduct. Trump has disrupted all of that—greatly increasing the chances of miscalculation or escalation. And the crisis will not end when Trump leaves the White House. Some of the new weapons and delivery systems that Trump wants will take years to develop, meaning that the next president will have even more options on the table—and more threats to face.
The United States has spent the last three decades free from the fear of nuclear annihilation, thanks in large part to a rational combination of threats, diplomacy, and reassurance. Trump’s rejection of these norms creates a new nuclear shadow that may linger long after he departs.
They say that there are decades where nothing happens and weeks where decades happen; but sometimes there are 18 months where so much and so little happens that it is hard to tell whether events are static or in constant flux. This is the strange impasse at the heart of British politics today: a stalemate that manifests itself as chaos, a historic decision that has brought Britain to a dreary standstill, ever since the Brexit referendum on June 23, 2016.
Following feverish last-minute talks between U.K. and European Union leaders last week, the EU seems set to accept that Brexit negotiations can finally move on to their second phase, two months later than planned. The first phase consisted of establishing basic principles of Britain’s departure—above all, regarding Britain’s financial settlement (dubbed the “divorce bill”), the rights of EU citizens already living in Britain, and the border between Northern Ireland and the Republic of Ireland. The second phase will address a future U.K.-EU relationship—a far greater challenge, all agree, when fantasy will hit the reality of a fixed, inferior trade deal. “We all know that breaking up is hard,” European Council President Donald Tusk warned. “But breaking up and building a new relationship is much harder.”
That’s not to say phase one was easy. The Irish border posed a particularly acute problem. If Britain leaves the EU’s single market and customs union, as is Prime Minister Theresa May’s plan, a hard border will have to be reinstated between North and South, not only bringing back memories and tensions of the Troubles, but also negating Britain’s commitments to Ireland in older treaties. The solution the U.K. eventually settled on was to delay the decision. “In the absence of agreed solutions, the United Kingdom will maintain full alignment with those rules of the Internal Market and the Customs Union,” the 15-page document states in paragraph 49.
As ever, then, the certainty that Britain will leave the EU belies the utter uncertainty surrounding what that leaving will look like. During the referendum campaign, Switzerland, Norway, Canada, and even Australia were cited as examples Britain could emulate in different ways. Now Michel Barnier, the EU’s chief negotiator, has stated that, based on Britain’s priorities, Canada is the only realistic model, although comparisons have also been drawn with Ukraine’s and Turkey’s relationships with the EU (not what Brexiteers had in mind). Canada’s arrangement would certainly satisfy those who see nothing more exciting than brokering new free-trade deals around the world. But it would also restrict trade access for services—a sector that accounts for almost 80 percent of the U.K.’s GDP and 40 percent of its exports. “What we want is a bespoke outcome,” David Davis, Britain’s Brexit secretary, explained on Sunday. “We’ll probably start with the best of Canada, the best of Japan, and the best of South Korea, and then add to that the bits that are missing, which is services. Canada plus plus plus would be one way of putting it.”
Brussels finds these fantasies of a pick-and-mix relationship frustrating. “Not everyone has yet well understood that there are points that are non-negotiable for the EU,” Barnier sighed. But whatever path the U.K. eventually chooses, the prospects for the national economy look bleak. On Tuesday, the Rand Corporation published a report that concluded that almost all possible trading relationships between the EU and the U.K. after Brexit will leave Britain worse off. The report adds that it may not be until 2031 that any of the potentially dynamic effects from leaving the EU are seen. During the referendum, the Leave campaign calculated that an extra £300 would be saved by the average U.K. household per year, once the U.K. stopped paying into the EU budget. Last month, a study by the London School of Economics found that the average household will already be paying at least an extra £400 in shopping annually, due to Brexit-induced inflation. Since that study, inflation has only gone up, reaching a near six-year high for November.
But for most Brexit-believers, the problem will never be Brexit, only ever that Brexit was carried out wrong. To this end, the morning after phase one was completed, the range of responses on the front pages of the newspapers was vast and revealing. “REJOICE!” cheered the Daily Mail on December 8, “WE’RE ON OUR WAY!” The Daily Telegraph adopted the stoic language of sacrifice, calling May’s compromise “The price of freedom.” The Daily Mirror was less impressed, labelling May “Mrs SOFTEE” and lamenting both the proposed cost of the divorce bill—at least £39 billion, but likely much more—and the revived possibility that Britain could stay in the single market (a so-called “soft Brexit”). The next day, the Sunday edition of the same newspaper could have come from a parallel universe: “After May’s triumph in Brussels, we’ve got the EU over the barrel,” the Sunday Mirror sang. (The story was apparently an “exclusive.”)
Parallel universes are standard fare in Brexit Britain. The reactions from leading Brexiteers—who see any expert criticism as a conspiracy—were no less varied. While many in May’s cabinet, including key figures in the Leave campaign like Boris Johnson and Michael Gove, made public displays of loyalty and praise, outside the party views were more hostile. The current leader of the U.K. Independence Party (UKIP) said May had “surrendered.” Arron Banks, founder of the Leave.EU movement (and currently under investigation by the Electoral Commission), declared that “Theresa May has betrayed the country and the 17.4 million Leave voters.” The prospect of staying in the single market was beyond the pale: “Full regulatory alignment with the internal market and customs union? We may as well just bend over,” Banks said. Nigel Farage, perhaps Brexit’s most prominent cheerleader, felt similarly. He called the agreement a “humiliation” and a “capitulation”: “This isn’t what 17.4 million people voted for.”
Farage is right: This isn’t what 17.4 million people voted for. None of this is. The question posed in the referendum on June 23, 18 months ago, stated simply: “Should the United Kingdom remain a member of the European Union or leave the European Union?” Voters could then tick one of two boxes: “Remain a member of the European Union” or “Leave the European Union.” The former meant the status quo; the latter was whatever you wanted it to be: Norway, Switzerland, Canada plus plus plus. Surprise, surprise, the status quo didn’t win.
The toxic simplicity of the referendum question is still with us, begetting a situation where everyone wants something else, even—or especially—those who want the same thing: Brexit. Appeasing all the different desires is an especially hard task as Brexit’s fault lines cut across traditional party lines, forcing parties and politicians to hold multiple, incompatible positions simultaneously. Hence May’s empty aphorisms: “Brexit means Brexit” and “No deal is better than a bad deal.” And thus the latest addition to her collection: “Nothing is agreed until everything is agreed.” A common line in many treaties, in Britain it serves a specific purpose, postponing the anger of disappointed dreams. If “nothing is agreed until everything is agreed,” then nothing can be criticized until the final moment—because nothing has actually happened.
This is Brexit at its best: a world where everything is possible but nothing takes place. For the Brexit-brigade, the longer this liminal space exists the better. So after finally securing the “hard-won” arrangement of phase one, Davis, the Brexit secretary, then appeared on television to insist that it was “much more a statement of intent than it was a legally enforceable thing.” Government aides were also quoted in newspapers as telling Johnson and Gove that any agreement so far was “meaningless” and “not binding,” with “full alignment … not meaning anything in EU law.” Reading these reports, politicians in Ireland were furious: “Both Ireland and the EU will be holding the U.K. to the phase one agreement,” an official statement declared. Davis then claimed that he had been misunderstood: “Of course it’s legally enforceable … It’s more than legally enforceable.” EU officials and diplomats were equally dismayed by Davis’s remarks. “It’s not helpful if people cast everything into doubt 24 hours later,” one EU source said. But for the Conservative Party it is very helpful, even essential, to navigating this impossible, self-immolating challenge.
For now, however, the specter of Europe is haunting Britain, capturing all its attention and casting any other concern—and common sense—into the shadows. Most of Britain’s social problems precede Brexit, compounded by the Conservative Party’s enduring commitment to austerity. As Brexit becomes more and more demanding, however, these problems will only recede further from public view, growing worse as a result. On December 3, all four members of the Social Mobility Commission resigned in protest of the government’s failure to address stagnant life prospects across the country. The team was empowered by May at the beginning of her prime-ministership with her promise to focus on the “everyday injustices that ordinary working class families feel.” Now the commissioners have quit en masse because the government, they allege, does not have the “necessary bandwidth” to tackle other issues. A few weeks earlier, the commission had released its annual “State of the Nation” report, which found, among other depressing facts, that five out of six people in low-pay work in 2006 were still in the same work a decade later.
Brexit would be a hard enough task at the best of times—but then Brexit probably wouldn’t have happened at the best of times. It would be foolish to separate Britain’s dire domestic failures from the vote to leave the EU. What becomes clearer with every week is that it would be equally foolish to think that leaving will fix them.
The Post—Steven Spielberg’s movie about the Washington Post during the publication of the Pentagon Papers—is, like many of his movies, a David-and-Goliath story. The theme of a little guy taking on a mighty power—inevitably heartwarming and a bit corny—has dominated Spielberg’s work in some form since Duel, a 1971 television movie he made when he was 24. In Duel the Goliath is an 18-wheel tanker truck operated by a man in cowboy boots with a bad case of road rage. The David is a Plymouth sedan driven by a mild-mannered traveling salesman. The car’s driver survives, and the truck runs off a cliff and crashes like an enormous leaking corpse. The pattern persists: Just think of Indiana Jones deadpanning, “The Nazis, I hate these guys.” In The Post, Spielberg sets the Goliath of the Nixon administration against the publisher Katharine Graham (Meryl Streep) and editor Ben Bradlee (Tom Hanks).
If the story of a bullying president and an embattled press corps sounds familiar, that’s because Spielberg fast-tracked the script’s production last spring. Casting Meryl Streep and Tom Hanks, who have both been vocal critics of the Trump administration, in the lead roles is more than a little on the nose. The historical allegory is neat, and obviousness isn’t a flaw in a protest movie. But as a movie about journalism, The Post substitutes righteousness for suspense, and legal and financial distresses for the paranoid dread that marks the classics of the genre, which happen to have been made during and just after the Nixon administration.
The Post opens with the familiar Hollywood shorthand for the Vietnam War: American men in camouflage, a dirt clearing amid jungle vegetation, the sounds of helicopter propellers and Creedence Clearwater Revival. The novel element is a typewriter. It belongs to Daniel Ellsberg (Matthew Rhys), the RAND Corporation analyst employed by the Department of Defense to embed with the troops and contribute to its history of the Vietnam War. He would become the whistleblower, leaking thousands of pages of secret documents first to the New York Times and then to the Post. Ellsberg’s turn to dissent is handled with economy in The Post. Flying home from Vietnam, he witnesses Robert McNamara (Bruce Greenwood), defense secretary to Kennedy and Johnson, ranting about how the war is only getting worse for the U.S., and in the next scene watches him say the opposite to reporters.
After that prologue, the stress falls on Graham, Bradlee, and their newspaper, and The Post plays like a prequel to All the President’s Men. But the contrast between the two pictures is instructive. There’s a reason why most films about journalists focus on reporters, make editors into colorful supporting characters, and all but leave out publishers. There’s no Katharine Graham character in All the President’s Men, though Lauren Bacall, the ex-wife of Jason Robards (who played Bradlee), was the filmmakers’ dream choice for the role before the character was written out of the script. It would have been a better movie for it, no doubt.
Directed by Alan J. Pakula, All the President’s Men is a classic of 1970s paranoid cinema, alongside Pakula’s The Parallax View and Sydney Pollack’s Three Days of the Condor (which, like All the President’s Men, starred Robert Redford). These films pit journalists—or in the case of Condor a low-level CIA analyst turned whistleblower—against the shadowy forces of the government or secret organizations that may be colluding with the government. The Post transfers the tensions from the reporters on the ground to management, who have to reckon with the potential collateral damage to their relationships with presidents, cabinet members, and lawmakers. Things aren’t so suspenseful when the shadowy government official is the heroine’s frequent dinner guest. Will McNamara stop coming over for supper if Graham publishes the Pentagon Papers? This is one of the central moral conundrums in The Post, and it’s difficult to care.
The Post brass’s close social ties to powerful people showed through in its coverage. Despite its post-Watergate image as a crusading publication, in the Pentagon Papers era the Post often went out of its way to be sympathetic to power: When Bradlee discovered that his deceased sister-in-law Mary Meyer’s diary detailed her affair with John F. Kennedy, he handed the document over to James Jesus Angleton of the CIA rather than break the story in the Post. This was in keeping with the general reluctance to report on the personal lives of politicians that characterized the era, but the culture of deference went much deeper. The gap between the Pentagon Papers’ secret narrative of the war and the story the public knew showed how dependent the press was on the government’s official narrative. Blockbusters like Seymour Hersh’s scoop on the My Lai massacre were the exception to the rule.
Ellsberg and Post reporter Ben Bagdikian (Bob Odenkirk) share the film’s few paranoid scenes: hushed calls from payphones, a meeting in a motel. They both accept that they might go to jail and that exposing the truth about the Vietnam War would be worth it. Meanwhile, Bradlee barks and frets from his office: He wants to compete with the Times, but he’s running from behind. While the Times was exposing state secrets, the Post put Nixon’s daughter Tricia’s wedding on the front page. (Getting access to that event is treated as a matter of grave importance, a sign of the paper’s lingering provinciality.) Hanks plays Bradlee as a caricature of the charismatic crank Robards inhabited. It’s a supporting role thrust awkwardly to the film’s center.
The paper’s investors were another obstacle to defying the Nixon administration. The Post had just made an initial public offering when the Pentagon Papers were breaking news. The IPO was necessary to solidify the paper’s finances—more than once it’s mentioned that raising the share price by $3 would generate $3 million to hire 25 additional reporters (a figure that seems anachronistic, at best)—but it put the Graham family’s control of the Post at risk. The prospect of an investor revolt if Graham and Bradlee are sentenced to jail time for publishing the leaked documents leads to The Post’s moment of truth. Graham decides the paper’s mission as expressed in the IPO document outweighs all else, and journalism wins out over capitalism. The Supreme Court decides in favor of the Times and the Post, and the Graham family maintains control of the Post (as they would until 2013, when they sold to Jeff Bezos for $250 million).
Despite these unheroic conflicts of interest, the film portrays this period as a golden era for journalism. In many ways it was a golden age, but the real heroes were the whistleblowers and reporters, which is not to slight the real risks editors and publishers like Bradlee and Graham took. The images Spielberg deploys to show journalism in action induce nostalgia for the pre-digital age: photocopies of 4,000 pages of the Pentagon Papers spread in dozens of piles on the floor of Bradlee’s home library, being sorted seemingly at random by a few reporters; a gorgeous giant printing press with fresh editions criss-crossing on conveyor belts from floor to ceiling. The film is animated by a sense of yearning for a time when America could count on its patrician class to act in the country’s interest in the name of the Constitution. The family model of media ownership is still with us, even if the Xerox machine and the printing press seem to be on the way out, though now the families are named Murdoch and Koch, not to mention Bezos.
A pair of recent HBO documentaries set the table for The Post, offering hagiographic portraits of Bradlee and Spielberg. The Newspaperman, adapted from Bradlee’s memoir A Good Life and narrated in his voice, spends 13 minutes on Bradlee’s friendship with John F. Kennedy, and nine minutes on the Pentagon Papers. “To me,” Bradlee says, “failure to publish without a fight would brand the Post forever as an establishment tool of whatever administration was in power and end the Bradlee era before it got off the ground.” Indeed. Neither the documentary nor The Post ever asks why Bradlee and Graham didn’t have the real story of the Vietnam War before Ellsberg handed it to them. They were, after all, routinely dining and vacationing with the men who made the decisions and kept the secrets.
Steven Spielberg is sentimental about the Washington Post—as he has tended to sentimentalize institutions and events throughout his career. Tom Stoppard touches on this in Susan Lacy’s Spielberg, describing the 1987 movie Empire of the Sun as “a truly great film but for me ultimately it shaded into an unnecessary softness or sentimentality. I don’t know where it comes from. But he likes and enjoys sentiment. It’s part of him.”
The sentiment evoked in The Post is righteous opposition to Richard Nixon: an easy feeling to muster decades after his landslide victories. Nixon is glimpsed a few times in crooked silhouette through the windows of the White House, his real voice heard from archival tapes. The actual man is rendered a cartoon villain in an otherwise realist film.
“I really believe in this country,” Spielberg says in the documentary, “and I always have, and it’s just resonated throughout my work: wanting to tell American stories, wanting to tell stories about principled, ethical people, who against all advice and against most everyone’s better judgment just proceed to do the right thing. Now I’m sure that sounds like I’m this kind of, you know, idealist, or some sort of a patriot, but I am a patriot, and I’m somewhat of an idealist too.”
The Post is a slick drama that will bolster rattled audiences’ faith in the fourth estate’s ability to check government wrongs, but its patriotism and idealism have an air of self-congratulation, and that’s not the spirit that will flip the House or the Senate in 2018 or the White House in 2020.
“Doug Jones wants abortion to be allowed until the moment before birth,” one ad blared. “A vote for Doug Jones is a vote for more black abortions, no school choice, and higher taxes for job creators,” said another. The ads described a candidate who didn’t exist. Jones, who won last night’s special election to replace Jeff Sessions in the Senate, is no radical. His pro-choice views are in line with the Democratic mainstream, meaning that he supports the right to choose and current restrictions on late-term abortions. Conventional wisdom asserts that this position should have cost him last night’s race against Roy Moore.
Jones’s unlikely victory points back to his opponent. In any other year, Moore would have been an outlier, a freak. With Donald Trump in office, however, Moore is part of a pattern. The president’s early endorsement of the establishment apparatchik Luther Strange always seemed more like a forced concession than a demonstration of true feeling. In his flamboyant disregard for democratic norms, his insistence that mounting sexual misconduct allegations are “fake news,” and his reliance on the deep antipathy conservatives hold toward liberals, Moore resembled Trump. And like Trump, he scooped up most of the white votes in his election and a stunning 80 percent of white evangelical votes.
Hypocrisy? Maybe. But that is not how white evangelicals understand it. Most of Roy Moore’s voters didn’t think they supported an accused pedophile; they simply didn’t believe the allegations that he molested and preyed on teenage girls when he was in his thirties. They decided it was a cruel myth, invented by D.C. wheelers-and-dealers to destroy a godly man. A strong thread connects this delusion to Moore’s anti-abortion rhetoric: If someone would kill an unborn baby, can you really believe anything they say?
The pro-life movement now finds itself inside a trap that it built. Its claim to moral superiority rests on the totalizing depravity of the opposition. Without a foil, pro-lifers must depend instead on the strength of their arguments, and the idea that ending a pregnancy is equivalent to taking the life of a child or adult has not convinced most Americans. And now it’s becoming more difficult than ever to uphold this dichotomy between the righteous and the fallen, and to pretend that pro-life candidates belong firmly to the former camp.
“No matter the outcome of today’s special election in Alabama for a coveted U.S. Senate seat, there is already one loser: Christian faith,” Christianity Today editorialized hours before voters rejected Moore. “When it comes to either matters of life and death or personal commitments of the human heart, no one will believe a word we say, perhaps for a generation.” This is an eloquent half-truth. The loser isn’t the Christian faith but political evangelicalism, doomed by a marriage it made decades ago.
White evangelicals may genuinely believe abortion is murder, but they also believe that a failure to agree with them indicates some deep spiritual brokenness. This is a kind of political presuppositionalism—the idea that a Christian can’t reason with a non-Christian since only Christians are capable of reason—and it is reaching the end of a useful life. It owes its demise to the broken spiritual state of the Republican Party, exemplified not only by Roy Moore and Donald Trump but also by the supply-siders and law-and-order fetishists.
On Tuesday, Jones flipped 12 counties that voted for Trump last year. In one of the night’s most dramatic results, Tuscaloosa County shifted 36 points from Trump to Jones. It is the state’s Black Belt, however, that really deserves credit for Jones’s win. It’s home to a high concentration of black voters, and they turned out in spectacular numbers to reject Moore. Considered together, these results disprove one of the oldest canards about Southern politics: that you need to run pro-life candidates to win. Jones’s upset is the latest data point in an emerging trend. Democrats flipped a number of historically conservative districts in Virginia to give Republicans a real fight for control of the House of Delegates. They have flipped other red state seats in Oklahoma and Georgia.
This change in conventional wisdom couldn’t come at a more crucial time. Pro-life politics didn’t take root with a wave of Jerry Falwell’s hand. The Christian right invested money and decades of skilled grassroots work to turn the pro-life label into a form of tribal affiliation. It changed minds. The Democratic Party’s best chance to secure its success for the long term, then, is to do the same. It must not cede ground so willingly on abortion, as if it is embarrassed by its own position.
In Alabama, the white voters Jones did convert have maybe learned something black voters already understood. Pro-life politics don’t guarantee moral behavior. A politician like Roy Moore can talk about saving babies one moment and lie about being banned from malls the next. Black voters in Alabama confronted decades of Republican-led voter suppression efforts to vote for Doug Jones last night. The poverty they disproportionately experience is the deliberate creation of a Republican Party determined to block any expansion of the welfare state. Black Alabamians have a very good idea of what white evangelical morality means.
Jones replaces Attorney General Jeff Sessions, a pro-life Republican who is as hostile to affirmative action as he is amenable to expanding the authority of American police. Abortion stops a beating heart, the Moore campaign warned—but so do police. Yet, American conservatives are still likelier to protest abortion than the killings of Trayvon Martin and Tamir Rice.
The lie of the “pro-life” label—with all the moral superiority it implies—is one that many white voters are about to learn. The GOP-controlled Congress continues its attempts to undermine the country’s health care system and funnel more money to the wealthy. By insisting on pro-choice candidates, Democrats have a chance not only to win seats, but also to assert the moral claims of being part of the pro-choice camp. “It’s time for them to get off their ass and start making life better for black folks and people who are poor,” Charles Barkley told CNN on Tuesday night. “They’ve always had our votes, and they have abused our votes, and this is a wake-up call. We’ve got them in a great position now, but this is a wake-up call for Democrats to do better for black people and poor white people.” Democrats can’t do that by conceding abortion to the party of Trump.
“You know who’s going to get hurt by this?” a member of Congress asked me recently, referring to the about-time uprising of women against predatory men. “Women.” He explained that male members of Congress are now going to be reluctant to hire a woman when they have the option of hiring a man for a job, and that very attractive women would be particularly at a disadvantage in obtaining jobs on Capitol Hill. (Buxom is out.) Self-protection, in other words, might well lead to a new form of discrimination. And this could travel beyond elected politicians, though they’re feeling especially worried now.
Members of Congress have been speaking uneasily among themselves ever since Al Franken was drummed out of the Senate by many of his Democratic colleagues in early December. Nobody wants to talk about it on the record, but politicians in both parties and in both chambers remain disturbed by how Franken was dealt with. In particular, a number of Senate Democrats were bothered by how their colleagues treated him, as was a large but unmeasurable portion of the public. And some of the unfortunate implications are already becoming clear.
The whole thing happened with startling speed—no deliberations, no process, and no pause for thought, it seemed. The main actors against him got increasingly worked up—and they struck at the first opportunity. The entire episode, from when the first complaint about Franken was aired to when he announced unhappily that he’d leave the Senate, took three weeks; his self-appointed prosecutors turned on a dime, at first supporting due process (consideration by the Senate Ethics Committee) and then throwing it to the wind. There wasn’t even a meeting of the party caucus to deliberate and discuss. (Male Democratic senators with misgivings didn’t want to get in the way of the women.) A group of Democratic women senators got up a head of steam; its ringleader, Senator Kirsten Gillibrand of New York, declared a doctrine of “zero tolerance.” “Enough is enough!” became not just an expression of exasperation but a policy.
With this precedent, members of Congress (and others as well) became vulnerable to the acts of people not of good will. What is the protection against someone, or several people, deciding to gang up on a member of Congress by inventing incidents?
What’s particularly disturbing about the Franken affair is that a senator was driven from the seat he was elected to because he’d become inconvenient. The death knell came with the seventh—or was it the eighth?—complaint about Franken touching or patting or whatever some woman’s bottom, or in one case (following the original charge of his forcing his tongue down the complainant’s throat) asking for a kiss. Almost all of these charges were of actions before he came to the Senate, and several were anonymous. But it was less these acts—immature and jerky, to be sure—that threatened to overturn the verdict of the voters of Minnesota than the fact that the charges kept coming. (An option would have been to demand good behavior or else, and leave it to the next election.)
What was the inconvenience that the sudden spate of complaints about Franken caused the Democrats? Well, you see, the Democrats—Senate Minority Leader Chuck Schumer weighed in, probably sounding Franken’s doom—didn’t want to have to answer the “what-about” question when they attacked the Alabama Senate candidate Roy Moore for the documented charges against him of pedophilia, or when they attempted a new assault on Donald Trump’s predatory behavior toward women in the past.
Wait. The voters knew about Trump’s aggressiveness toward women when they elected him president. (True, a majority didn’t vote for him, not by a long shot, but are we going to get into the fine points of the Electoral College’s—or Russia’s—role here? Trump won the election.) They’d heard the Access Hollywood tape. The recent calls for Trump to resign over his sexual exploits strike me as pointless. (But of course they yield good publicity.) Obviously, he’s not going to, and Franken’s having been discarded wouldn’t make it even a teensy bit more likely. A senator is forced out of his seat for a talking point?
Politics usually proceeds on the basis of mixed motives. We can’t know which of the senators pushing Franken to go was doing so to get him out of the way of their own ambitions. He’d begun to have around him that hazy presidential talk that also hovers over at least 20 other Democrats. A number of commentators have said that while shoving Franken aside was, well, unfortunate, it was excellent politics for the Democrats to arrange to have a “clean slate” when it came to the matter of sexual hijinks.
Well, at what cost? Is almost any sexual infraction subject to, in effect, capital punishment—the loss of a seat in the Congress? Have Senator Gillibrand and some of her allies thought through what “zero tolerance” means? If loss of a seat over one infraction is considered too dire—if such lenience were to occur—then how many gaucheries would be sufficient to drive an elected official from office? Does it matter what they were? Are consensual affairs to be permitted? How consensual is consensual when the man is the woman’s employer? How are such decisions to be made? Is the punishment to be different if such reports come about at a time when one party isn’t trying to embarrass the other one? Or to decapitate its leader?
Would the same infractions that have already cost a senator his job be considered less serious then? These aren’t hypothetical questions. And they’re all about to become more difficult. Rumors are all over Washington, and are the subject of concerned conversations on Capitol Hill, that at least one major newspaper is planning an exposé of a large number of randy, self-indulgent members of Congress—perhaps some 30 or 40 of them, it’s said. Capitol Hill has long and widely been known as a place where Eros and opportunity meet. Now that the subject is out in the open, will that lead to an exodus of lawmakers? The great correction is overdue; can we pull it off without short-circuiting democracy?
In 1985, Donald Shanor, a professor at the Columbia School of Journalism, published a slim book called Behind the Lines: The Private War Against Soviet Censorship. In it, Shanor outlined a problem that was emerging in Gorbachev’s USSR: It was becoming increasingly obvious that the Soviet Union needed to embrace new technologies in order to stay competitive among advanced nations. Yet, should its leaders do so, they risked losing control of a system of censorship that had been in place for more than 60 years.
That system was longstanding but not static. From 1922 until the collapse of the USSR in 1991, state-sponsored censorship varied from country to country, waxing and waning under different regimes. Over the years, it evolved from a Stalinist approach wherein all mentions of domestic food shortages, foreign accomplishments, or anything deemed “counter-revolutionary” were expunged; to the comparatively relaxed censorship of Khrushchev’s “thaw”; back to a hardline approach under Brezhnev; and finally to Gorbachev, who implemented the reforms that ultimately pulled down the Iron Curtain.
The relaxation of the Khrushchev years perhaps made this opening up inevitable. The circulation of underground goods had boomed in the wake of the thaw. Brezhnev’s crackdown in the 1960s and 1970s had the opposite of its intended effect: It resulted in an explosion of homemade pamphlets, books, and audiotapes. Khrushchev’s reforms had meant that there were now more educated urban Soviets than ever before, and with the reduction of the working week from six days to five, they had leisure time at their disposal to read and share banned materials. This isn’t to say people were open about it—samizdat books and journals were often read at night, so as to be passed on in the mornings, and people were still being surveilled, jailed, and brutally interrogated for underground activities. Nevertheless, by the time Gorbachev came to power, veins of distribution were running throughout the USSR and the Eastern bloc, bringing foreign films, banned literature, and American music to receptive Soviet citizens.
The transmission of Western culture into the communist East is the broad subject of three offbeat documentaries that have come out in recent years: Disco and Atomic War (2009), Chuck Norris vs. Communism (2015), and now X-Ray Audio, a short documentary currently on the festival circuit. While the world Svetlana Alexievich chronicled in Secondhand Time, her oral history of “the last of the Soviets,” seems distant, the samizdat that this era produced continues to surface. Pirated records, bootlegged videotapes, and photocopied novels still appear at flea markets in St. Petersburg and Tbilisi, fuzzy replicas of Western originals. Their sheer prevalence is a testament to the demand for them at one time, and the ingenuity involved in creating them.
By the end of the 1980s, the challenge of staying competitive had acquired a twist: When capitalism did arrive, the East’s younger inhabitants were better equipped to adapt, in part because they’d already seen what it looked like.
On June 24, 1987, cars from all over Estonia were jammed for miles on roads entering the northern capital of Tallinn. People had driven for hours to watch Emmanuelle, a softcore French film that was screening that evening on Finnish TV. Documentarian Jaak Kilmi recalls that the streets of Tallinn were eerily quiet as everyone in the city prepared their antennas and stayed home to tune in. It was an unprecedented glimpse into a uniquely Western phenomenon, and the sheer fact of the screening was at the time unimaginable to many Soviet Estonians, even if they regularly watched foreign TV. There were no Nielsen numbers, of course, but Kilmi’s Disco and Atomic War claims there was a better barometer to measure the film’s popularity—nine months later the Estonian birth rate jumped to a record high.
It takes three and a half hours to get from Helsinki to Tallinn by ferry across the Gulf of Finland. The cities face each other across the water, and on a clear day it’s possible to see Finland from one of Tallinn’s medieval stone towers. Finland was never part of the USSR—though it went out of its way to maintain friendly relations with the Soviets during the Cold War—and it had access to Western TV, which it beamed across the gulf to its southern neighbor. This began in the 1950s, but during the ’70s and ’80s, Kilmi recalls, enterprising Finns and Estonians did brisk business shuttling microchips into Tallinn and fabricating “Finnish blocks” that could be installed in Soviet TVs as a way to circumvent censorship. This was the era of Knight Rider and disco, of Star Wars and Dallas, the plots of which were recounted in letters to relatives in other parts of the country. While the shelves of Estonian markets were empty, Finnish TV carried ads of a portly chef prodding succulent cuts of veal and lamb.
As more and more makeshift copper antennas sprouted on Estonian roofs, Party officials made halfhearted attempts to stop the transmissions, arresting violators and occasionally threatening to build a giant net in the ocean to jam the signal. After an Estonian engineer figured out how to use mercury to improve TV antennae, officials blamed the run on thermometers on an invented flu outbreak. By the mid-’80s, efforts to police TV in Estonia were given up altogether. This was also tactical, Kilmi claims: “Later studies have shown that for 20 years Soviet Estonia was a secret laboratory for the KGB to research what would happen to the Soviet citizen in the flood of hostile propaganda.” In other words, the Soviet Union could have easily jammed Finnish TV all along.
Several hundred miles south, in Romania, censorship was far more severe under the Eastern Bloc dictatorship of Nicolae Ceausescu. Ceausescu ruled the country through a cult of personality, tightly controlling the national TV station and newspaper, and overseeing a sprawling network of secret police. Even so, by the 1980s, Romania had developed a dynamic bootleg movie culture. Dubbed videocassettes of Western movies were trafficked across the country, and ticketed underground film screenings were held in family living rooms. Between 1985 and Ceausescu’s overthrow in 1989 (he and his wife were tried and executed on Christmas Day), over 3,000 films were dubbed by one woman, Irina Margareta Nistor, a translator for Romanian state TV. In Chuck Norris vs. Communism, filmmaker Ilinca Calugareanu traces the impact of Nistor’s work on generations of young Romanians, who came to identify her voice—the most famous in the country after Ceausescu’s own—with Sylvester Stallone, Chuck Norris, and Woody Allen. From a dubbing studio in the basement of a Bucharest apartment, Nistor dubbed as many as four movies a day on the fly; the tapes were illegally imported from Hungary by a shadowy businessman with links to the Stasi.
And then there was censorship in Russia itself. Stephen Coates got the idea for his documentary X-Ray Audio over a decade ago at a flea market in St. Petersburg. There he discovered a stall selling old X-rays that had been repurposed so that they could play music. Smugglers would carve grooves into discarded X-rays with recording lathes, and then cut them out so they could be played on record players. X-ray plates were the preferred material because they were both pliable and widely available at local hospitals. A black market quickly developed, and Soviets would buy banned records—which ranged from American jazz, mambo, and rock to Russian émigré musicians—from back alley dealers, the most famous of which was Leningrad’s Golden Dog Gang. On the X-Ray Audio project’s blog, Coates writes that the recordings “perhaps had a status something like that enjoyed by illegal drugs today—looked down on as being low culture but secretly enjoyed by a bohemian class.” In the early ’60s, the records became especially popular with Soviet hipsters known as stilyagi (the subject of yet another documentary) although owning them could lead to expulsion from the Communist Party.
From a distance of several decades, one could argue that the irony of these underground markets was that they established an intense demand for Beatles records and Solzhenitsyn novels that would later harden and be channeled under capitalism. It is perhaps a crueler irony that under communism, people had more time in which to make music and literature central to their lives. In Secondhand Time, several of Alexievich’s interviewees recall the literary culture that flourished under Communism—the zeal with which people read books, and the working hours that enabled them to do so while still being able to support their families. Censorship is an imperfect practice, and an effective way of stifling the flow of information is not just to cut off the supply of books, tapes, or records, but to deny people the resources—both time and money—they would need to consume them. In this way, Putin’s Russia, in which the wealthiest 10 percent of citizens control 87 percent of the country’s money, is increasingly catering to the needs of the state.
In the years after 1991, much of the physical infrastructure of the USSR was dismantled and sold for parts. As the ruble crashed, the black market boomed, and street vendors scandalized former Party apparatchiks by selling Soviet military medallions and uniforms. The transition to capitalism bankrupted millions and produced a handful of billionaires—a new oligarchic class, built on the back of the old power structure. Neither Chuck Norris nor Dallas catalyzed this collapse, but the way people had traded these cultural goods did give them a foretaste of how they might act under capitalism. While most of the samizdat materials were authored or composed by Soviet writers, Western media opened a window to an outside world, and helped reinforce the idea that the “homo Sovieticus” was already a thing of the past.
Once it became clear that Doug Jones had won an upset victory over Roy Moore in the Alabama Senate race on Tuesday, the immediate question was: How would the president take the news? Donald Trump, after all, was deeply invested in the race to replace former Alabama Senator Jeff Sessions. Elevating Sessions to attorney general seemed like a safe move back in November of 2016. Alabama was a deep red state where, in the presidential election a week earlier, Trump had won 62 percent of the vote to Hillary Clinton’s 34; and in the 2014 Senate election, Sessions had won by a margin that even a communist dictator would admire: 97 percent of the vote.
But as it turned out, Alabama was a double loss for Trump. First, Alabama Republicans rejected Trump’s choice in the primary, Luther Strange, who had been appointed to Sessions’s seat in the interim. Instead, they went with the gleaming-eyed, fanatical Moore, a candidate so Trumpian that even Trump blanched at supporting him. But, amazingly, even after credible allegations of child molestation surfaced against Moore, Trump decided that he would support the theocratic candidate, spending his political capital on a rally in neighboring Pensacola, Florida.
As Cornell Law School professor Josh Chafetz noted:
When the voters once again defied Trump, he (or more likely his staff) issued a surprisingly gracious tweet:
This unexpected civility is a form of whistling past the graveyard. Jones’s victory is a huge disaster for Trump for any number of reasons. First and foremost, it reveals that Trump’s vaunted skill at dominating the media landscape still leaves him almost completely powerless to shape politics and policy. Trump might commandeer the bully pulpit, but it’s not clear that anyone is really heeding his rants. He might even be realizing as much, for he tweeted on Wednesday morning:
Trump’s few areas of actual success, such as court nominations and potentially the tax bill, have involved ceding decision-making to Republican leaders in Congress. In effect, Trump can only get anything done when he offloads his duties to people like House Speaker Paul Ryan or Senate Majority Leader Mitch McConnell. But Trump’s reliance on the congressional GOP looks riskier now that his Senate margin has shrunk to 51-49. Moreover, with Jones in the Senate, the Democrats have a much better chance of retaking the chamber in 2018. If they do so, they’ll be able to stop Trump’s nominations dead in their tracks, including his judicial nominations. They’ll also be able to investigate the president, aided possibly by a Democratic-controlled House of Representatives.
The upset in Alabama follows the Democratic Party’s victories in statewide elections in Virginia and strong showings in special elections that they narrowly lost in Georgia, Montana, and South Carolina. All these elections show a common voting pattern, with Democratic voters energized and Trump supporters not showing up in anywhere near the same numbers they did in 2016. The story of Alabama is a familiar one in the Trump era: The Republican base is demoralized and divided, while the Democratic base is mobilized and increasingly united.
As Mark Schmitt of New America notes:
Trump is turning out to be a true disaster for the Republican Party, because his hardcore supporters are numerous enough to win primaries (as they did for both Moore and Trump), but they can only be mobilized by a divisive politics that alienates a chunk of traditional Republicans while animating the Democratic opposition. This can only lead to political disaster for the Trump-led GOP.
In the wake of Moore’s defeat, Republicans will be more divided than ever. Moore supporters like Steve Bannon, the Breitbart executive chairman and former Trump strategist, had prepared for such a result—and had already decided whom to blame for it.
The Republican Party is facing a nightmare 2018 scenario where Bannon-backed populist candidates disrupt the primaries, creating wounds that will make it difficult for the GOP to unify and win general elections.
There’s a final ominous fact about the Alabama election. Moore lost in large part because of the accusations of sexual assault against him. This is an indication of a sea change in American culture, one that bodes ill for a president who notoriously boasted that his celebrity allowed him to grope women with impunity. Seeing Tuesday night’s election results, Trump has every reason to worry that the social forces that took down Moore have the president in their crosshairs.
We are living in a golden age of rubbernecking entertainment. Even as audiences fracture according to hyperspecific tastes, it is not unusual for millions of people across the country to be puzzling out the same murder at once, dissecting clues from the real cold cases in twisty shows like HBO’s The Jinx and Netflix’s Making a Murderer. These series tend to be heavy on mystery but not on mourning; the victims feature only as the starting point for an investigation. The relatives of the departed rarely appear either, the ones who are left holding the torch and facing the bottomless vortex of questions after a crime. Stultifying grief doesn’t always make for compelling television.
In his new series, Wormwood, however, Errol Morris has made a bold choice: He has decided to make a crime documentary about a grieving son. Eric Olson has spent six decades searching for the truth about the death of his father, Frank, a military scientist who fell from a window of the Statler Hotel in New York City in 1953. Olson was nine years old at the time. His father’s death was instantly ruled a suicide, and in the years that followed, the family—Eric, his two siblings, and his mother, Alice—attempted to recalibrate to some semblance of normalcy despite having only murky details about their loss. But Eric could not seem to regain equilibrium. He had a sinking feeling that his father’s death was the product of malice and deep deception.
Then, in 1975, the Rockefeller Commission Report revealed that Frank had been part of a top-secret CIA program that involved dosing scientists with LSD without their knowledge. Olson obtained a bundle of CIA documents stating that his father had exhibited a negative reaction to the LSD, that he’d been brought to New York for psychiatric rehabilitation, and that he’d then leaped 13 stories from his hotel window despite his colleagues’ best efforts to restore his sanity. Despite a formal apology from Gerald Ford, Olson couldn’t stop digging. Being in the inner sanctum only made him more suspicious. He felt that the government’s contrition was too showy, a floral bouquet hiding a poisonous spider. He continued to hunt, to ask the needling questions for decades, losing his psychology practice, a woman he loved, and his grasp on reality along the way.
Olson does not feel entirely alone in his all-consuming grief. The show takes its name from a particularly bitter moment in Hamlet, when the bereaved prince whispers “wormwood, wormwood.” Obsessed by his father’s death, Hamlet slowly goes mad; no one close to him seems to care about it the way he does. The anger remains his own, the restless ghosts his alone to banish. This does not stop him from destroying everyone he loves in the process or from torpedoing his own life and many others’ because he cannot, and will not, stop pushing for a conclusion. Something similar, Olson says, has happened to him. At one point, he tells Morris, he even checked in to the exact room where his father died, just in case he might be able to reconstruct the events by physically inhabiting the scene of the crime. He didn’t sleep that night and instead sat up like a sentinel, awaiting his father’s ghost.
In 1994, after his mother died, Olson went so far as to exhume his father’s body, an investigation that turned up signs of foul play and opened up another wormhole of bitterness; not only was his father drugged without consent, but now he had potentially been murdered by the CIA. The dark questions kept coming, along with the doubt; the paranoia; the looping, seasick circles of what-ifs and Who is responsible? and Is everything, everywhere, a lie? Olson is a man possessed, a man whose curiosity is his tragic flaw, a man whose brain went to Harvard but whose heart never progressed past being nine years old and lied to, a lanky, loquacious loner with an ax to grind. He feels that Hamlet is the scaffolding that has wrapped around his quest. He is smart enough to know that this is a tragedy.
Morris, who dropped out of a graduate program at Berkeley in philosophy because “it was just a world of pedants,” has always seen himself as a kind of cinematic outsider. In 1988, he achieved his first major success as a documentarian with The Thin Blue Line, which told the story of Randall Dale Adams, a Texan man wrongfully sentenced to death for the murder of a police officer. Through crime scene re-enactments and a series of interviews, Morris presented a valiant case for Adams’s innocence, a case that led to the overturn of his sentence and his release in 1989. (Adams died a recluse in 2010.) Despite its impact, the movie did not fare well in awards season. The Thin Blue Line “was passed over for an Academy Award nomination,” Morris told Vulture in 2015, “because some people were appalled by the use of re-enactments”:
But there are re-enactments, and there are re-enactments. There are re-enactments that are known in the motion picture business as “show and tell” re-enactments. The re-enactments in The Thin Blue Line were not show and tell. It’s one of the things that makes them unique.
Morris used re-enactments not to shut down questions but to open them: If a witness was standing there when the shot was fired, what could she really see? This technique may seem old hat to us now. But in 1988, this sort of documentary filmmaking—what Morris has called “an essay on false history”—was something altogether new and in many ways baffling to the Academy. Was it advocacy? Journalism? Novelistic nonfiction?
Not only has Morris’s work continued to evade categories in the decades since, but many other filmmakers have followed his lead. “A whole group of people,” he has said, “literally everyone, believed a version of the world that was entirely wrong, and my accidental investigation of the story provided a different version of what happened.” This approach in many ways set the template for the true-crime boom of the last few years: Without it, we wouldn’t have Serial, or The Staircase, or The Jinx, or Making a Murderer, or even American Vandal, a parody of these shows set in a high school.
What was Morris to do, in the world he made, with the story of Frank Olson? Wormwood uses a lot of his old tricks but brings in a star-studded cast (Peter Sarsgaard, Tim Blake Nelson, Bob Balaban, Molly Parker) to play out possible versions of events. These scenes are not just re-enactments but, like Hamlet’s play within a play, something more inspired. Full of midcentury costumes and hotel neons, centering on an actual case of vertigo and conspiracy, they combine the elements of a Hitchcock film. Everyone is shifty-eyed, wearing tailored suits; a vague wash of acid green hangs over the city streets.
Morris has also said that if he gave Wormwood a subtitle, it would be The LSD Was a Red Herring, because while the CIA seems to imply that Frank Olson’s death was a result of the controversial and at times illegal experiments of the 1950s, both Eric Olson and others investigating the case find that the truth may be even more sinister. And yet, LSD permeates the series as a visual language; Morris makes a meal out of swirling cameras and hallucinations and the sense of fog that descends over Frank Olson (Sarsgaard) and never seems to leave. The early episodes move slowly, like the vision of someone in a twilight state, a gradual sublimation into Eric Olson’s mind. Once the story reaches its maximum saturation, however, it starts to veer off into wild directions, and its paths begin to tangle and become less clear.
Later in his life, Eric Olson discovers that his father was perhaps considered to be a dissident who had major concerns about the work he was tasked with in the lab. The LSD administered to him at a cabin in the woods was not, Olson learns, an innocent trial meant to see what happens when a scientist gets high, but a pointed effort to get him to confess his beliefs and show his most dangerous tendencies under the effects of a truth serum. If this is the case—I won’t reveal Olson’s reason for believing it is—then, as Morris implies, Frank Olson may have been not just murdered but executed.
This is where the story gets deeply messy and most compelling, when the son seems to come closest to touching the truth of his father’s demise and yet also to cracking up himself. The journalist Seymour Hersh, who worked with Olson to discover classified information about his father’s death, tells Morris in the last episode that he cannot publish the story without burning his source, and that, at least as of this year, he may never be able to reveal exactly what he knows about Frank Olson’s death at the Statler Hotel. What Hersh does say is that “Eric knows,” and that perhaps that should be enough. He also pleads with Morris not to make the film entirely about the state of media. “Don’t make this a big deal about journalism,” Hersh says. “About an amazing, sensitive, credible thing. Well, duh. The source is more important than the story, always.”
Eric Olson, Morris accepts, may never get the conclusion that he desires, even if the film reignites an official investigation into his father’s death. Little can change the fate of father or son. Wormwood is the tragedy of two men, both of whose lives were taken away from them on the same night. With the beautiful historical accuracy of his narrative and the most candid interviews he has ever done, Morris has created a work of true crime suspense that certainly crowns his own work, along with anything else available to stream now. But the pill is hard to swallow. Wormwood, the herb, was often used to make a harsh medicine, a tonic that burned on the way down even as it was meant to cleanse the body. Morris’s documentary works the same way—it exposes malfeasance, allowing Olson one last chance to air his trauma. And yet, even after truth comes to light, the aftertaste is bitter.
Off the coast of Louisiana, in an 8,000-square-mile swath of ocean, the marine life is suffocating to death. Nutrient pollution, flowing from the Mississippi River into the Gulf of Mexico, is spawning huge clumps of algae that suck oxygen from the water. Without oxygen, fish are struggling to breathe and fleeing the scene en masse. Plants and worms, unable to flee, are wasting away. This so-called “dead zone” in the Gulf has been appearing every spring and lasting until the winter since monitoring began in 1985, but this year it grew to the size of New Jersey—the largest dead zone ever recorded in the world. Next year it will probably be bigger.
If you missed this news, that’s probably because the rest of this year’s environmental news read like the script of a cli-fi drama. The U.S. had a record-breaking hurricane season, which begat yet more environmental destruction: Hurricane Harvey caused a toxic Superfund site overflow, excess carcinogenic air pollution, and a chemical plant explosion in Texas. Hurricane Irma caused millions of gallons of sewage to overflow all over the state of Florida. Hurricane Maria created a drinking water crisis in Puerto Rico (and may have killed more than 1,000 people). California, meanwhile, is still dealing with the deadliest and most destructive wildfire season on record, also with environmental after-effects; in the Bay Area, toxic ash laden with heavy metals infiltrated the soil and made the air unbreathable.
These catastrophes deservedly got wall-to-wall coverage on cable news, and front-page treatment in major newspapers. But in focusing on one kind of environmental disaster—extreme weather—many outlets overlooked another kind that’s destructive in its own right. About that dead zone in the Gulf, University of Michigan professor Don Scavia said, “Meat production is directly causing it.” This year, the meat industry has also been blamed for contaminating drinking water across the Midwest, destroying native species’ habitats, and increasingly for its role in causing global warming.
The meat industry’s main problem is its reliance on corn to feed animals. In 2016, corn crops caused most of the 1.15 million metric tons of nutrient pollution—excess nitrogen and phosphorus, mostly from fertilizer runoff—that was released into the Gulf. Thirty-six percent of those corn crops are used to feed chickens, cows, and pigs, most of which are eventually eaten by humans. As meat production increases, corn demand rises, producing more nutrient pollution and a bigger dead zone. The dead zone is bad for obvious reasons—as a concerned citizen once told Scavia, “8,000 square miles of no oxygen has got to be a bad thing”—but it also has consequences for humans, as it could decimate the Gulf shrimp industry.
Nutrient pollution has caused drinking water problems, too. According to the Environmental Protection Agency, “more than 100,000 miles of rivers and streams, close to 2.5 million acres of lakes, reservoirs and ponds, and more than 800 square miles of bays and estuaries in the United States have poor water quality because of fertilizer pollution.” Because of this, and how much meat contributes to nutrient pollution, the environmental group Mighty Earth released an investigation in August deeming the U.S. meat industry the largest source of water contamination throughout the Midwest. The pollution is “linked to cancer, birth defects, thyroid problems, as well as a serious condition called Blue Baby Syndrome, which lowers the amount of oxygen in infants’ blood,” the group claimed.
Mighty Earth also blames the meat industry for the destruction of American grasslands and prairies to make way for more cropland: One-third of all land in America is used either to provide pasture for animals that will be eaten, or to grow feed for them. This practice “destroys the remaining habitat of native species like monarch butterflies, bees, pheasants, and prairie dogs, whose habitat has already been shrunk by 150 years of prairie clearance to serve agriculture,” the group says. Grasslands and prairies are natural buffers, protecting waterways from pollution; the destruction of these habitats increases fertilizer runoff. “From feed to slaughter, our analysis found the meat industry to be the driving force behind some of the most urgent environmental crises facing our country,” the report read.
And then there’s the climate impact of meat. This year saw more warnings that meaningfully reducing global warming will require reducing emissions from animal agriculture, which make up 14 to 18 percent of global greenhouse gas emissions—more than the entire global transportation sector. Those emissions largely come from deforestation (since trees absorb carbon dioxide) and methane-rich cow farts; this year, NASA scientists revealed that the methane released by cow flatulence is contributing far more to climate change than previously believed. Two controversial peer-reviewed studies this year also showed how humans could meet international climate targets by changing their diets to replace meat with beans or bugs. (Diet changes could prevent dead zones, too, as U.S. fertilizer use would be cut in half if Americans switched to a mostly meatless, fish-heavy Mediterranean diet, according to one peer-reviewed study.)
These meat-related environmental issues aren’t likely to be addressed by the Trump administration. Trump, for one, doesn’t care about the climate impact of anything; he’s made that clear with his appointment of climate deniers to run almost every major agency. His secretary of agriculture, former Georgia Governor Sonny Perdue, hails from the country’s top chicken-producing state and has “received hundreds of thousands of dollars in campaign contributions from agribusiness,” according to Bloomberg. In April, Perdue threw out an Obama-era rule intended to protect small farmers from bigger meat companies, signaling to farmer advocacy groups that the administration bends to the will of so-called “Big Meat.” Next year, Congress is supposed to introduce the quinquennial Farm Bill, which among many other things can fund programs to help with nutrient management. But if it’s anything like Trump’s budget proposal for the USDA, it will “streamline” conservation programs intended to do just that.
But citizens need not depend on the Trump administration to address some of these problems. Big meat companies can and should be lobbied directly. Seventy-seven percent of consumers say sustainability factors into their food purchasing decisions. If people think a particular brand is destroying the planet, they are less likely to buy it. That’s why big meat companies like Tyson Foods are constantly promoting their sustainability initiatives and countering public criticism. Tyson’s senior director of public relations Gary Mickelson told Modern Farmer that Mighty Earth’s report was unfair because the company doesn’t farm the corn or even raise the cows. “It’s important for us to point out the supply chain, and say, hey, we’re not involved in the crop production business, and frankly, we own very few farms,” Mickelson said.
But the big companies like Tyson do dictate the way the entire crop market operates. If they demand that their suppliers have more sustainable fertilizer practices, those farms will be forced to change. As Modern Farmer points out, “these companies are the fulcrum in the entire system—the only entity with enough sway to truly change the way the whole system works.” In a year when it appears hopeless to convince the government to do anything to improve the environment, some targeted activism against the meat industry might actually make a difference. Making a conscious choice to eat less meat could help, too.
Doug Jones, a Democrat, should not have won a Senate seat in Alabama. Sure, there were polls in the run-up to Tuesday’s special election showing Jones with a ten-point lead, after allegations surfaced that his Republican opponent, the disgraced former judge Roy Moore, had molested and preyed upon teenage girls decades ago. But Alabama, one of the most conservative states in the country, hasn’t had a close race for national office in decades, and the polls are notoriously fickle. Donald Trump won the state by 28 points in the 2016 election. Under normal circumstances—or even slightly abnormal circumstances—a Republican would win.
But these are not slightly abnormal times—they are extremely abnormal ones. So Doug Jones will be the junior senator from Alabama, eking out a narrow victory over Moore, thanks in large part to a surge in African-American turnout. Moore’s voters, meanwhile, didn’t turn out in the numbers he needed, either because of the allegations against him, the fact that he was expected to win in spite of them, or both.
There is no way to overstate the significance of this upset. A Democrat will hold a Senate seat representing Alabama, shrinking the Republican majority to 51. Roy Moore will not hold national office, saving the country from yet another embarrassment. The Democrats’ chances of flipping the Senate in 2018 got a whole lot better, reducing the number of seats they need to flip to two. The GOP’s tax reform could be in jeopardy. And it shows that the gamble the Republicans took with an utterly compromised figure like Moore—not to mention Donald Trump—was a spectacularly poor one that will result in Democratic victories in 2018 and beyond.
It’s tempting to downplay the importance of Jones’s victory. Moore, after all, was a uniquely terrible candidate. While he undoubtedly reflects the pathologies that have defined the Republican Party over the past decade—most conspicuously in his politics of white grievance and white victimhood—he was in many ways an aberration, equating homosexuality with bestiality, suggesting that Muslims shouldn’t be allowed to serve in Congress, and arguing that families were better off in the slavery era. A run-of-the-mill Republican who supported Trump would likely have prevailed.
Still, over the past year Democrats have shown again and again—most spectacularly in last month’s elections in Virginia—that they can compete anywhere in the country, including Alabama, Montana, and Georgia. Jones ran a model race, and his campaign points to two factors that will play an important role in the 2018 midterms. The first is that Jones, a federal prosecutor who put away two of the KKK members who killed four girls in a Birmingham church bombing in 1963, was a strong candidate who didn’t shy away from his values. Many pundits pointed to his pro-choice beliefs as proof that he was out of step with the voters in his state, but Jones was able to present himself as a politician with integrity in an election where being a politician with integrity mattered. It’s almost a cliché at this point—or at the very least a Twitter meme—but turning out the base is crucial, and the Democratic base, in Alabama and elsewhere, is composed of minority voters and women.
The second factor is arguably more important: Republican voters are demoralized because the Republican president is enormously unpopular and the Republican Party has spent the past year doing enormously unpopular things. Yes, the Moore allegations made a big difference in this race. But they obscure the most important aspect of the special election in Alabama, which is that Republican voters are staying home across the country, while Democrats are voting at unprecedented levels.
Right now, the Republican Party is in its worst-case scenario. Trump won the 2016 election in part because he was able to convince many voters that he wouldn’t push typical Republican policies, which voters overwhelmingly recognize as benefiting the rich. Many voters decided that this aspect of his candidacy overwhelmed larger concerns about him personally. But because Trump has governed as a typical conservative Republican while changing nothing about his personality, Republicans are in a bind: Voters hate the president and hate his party’s agenda. Seemingly the only reason they have to vote is tribal antipathy toward the other side. And if that didn’t work in Alabama, where a pro-choice Democrat just won a Senate seat, Republicans are in deep trouble.
There could be some negative consequences from Jones’s victory. With the GOP’s Senate majority down to one vote, Republicans could speed up tax reform and other harmful legislative items. Jones’s narrow victory, in spite of Alabama’s onerous voter ID laws, could result in a surge of calls for even more restrictive voter ID laws designed to curb minority turnout. But right now, none of that matters. Doug Jones won, Roy Moore lost, and the GOP is in deep trouble. That’s what matters.
President Donald Trump is finally in the crosshairs of the #MeToo movement. Four women who have accused him of sexual misconduct repeated their allegations on Monday—that Trump ogled, groped, and kissed women without their consent. In response, New York Senator Kirsten Gillibrand joined a growing list of prominent Democrats who have called for Trump’s resignation. “President Trump has committed assault, according to these women,” she said on CNN. “And those are very credible allegations of misconduct and criminal activity, and he should be fully investigated and he should resign.”
Trump has “complained privately that the avalanche of charges taking down prominent men is spinning out of control,” according to a Politico report on Monday, so it was hardly a surprise when, on Tuesday morning, he attacked the female politician who has been most vocal about holding powerful men to account:
Though limited to 280 characters, Trump managed to hit many sexist tropes: women are weak (“lightweight”), women are subservient (“flunky” and “begging”), women are duplicitous prostitutes (“would do anything for them”), women are betrayers (“very disloyal”). Gillibrand’s response was righteous and steadfast:
Once sexual harassment became a pressing political topic, it was inevitable that these two would clash. It’s not just that Trump has been accused of misconduct more than a dozen times, and has a well-documented history of demeaning and belittling women; he also rose to the presidency by fomenting sexist doubts about his opponent Hillary Clinton. Gillibrand, meanwhile, has made combating sexual harassment and assault her signature issue, more so than any other politician in America. She hasn’t shied away from policing her own side, either. She was the first Democratic senator to call on her colleague Al Franken to resign, and she’s broken with party orthodoxy on Bill Clinton, saying that he should have resigned once the facts of the Monica Lewinsky scandal were known.
The contours of the Trump/Gillibrand feud lay out the political landscape of contemporary America, and could prefigure the role gender politics will play in the 2018 and 2020 elections. Still, for Gillibrand and the Democratic Party, there remains the problem of how to combine critiques of particular men like Trump, which are popular with the party’s base, with a broader policy agenda for fighting sexual harassment—one that appeals to nonpartisans, too.
The GOP response to Trump’s tweets shows how challenging that will be. While the president and Gillibrand traded blows, some Republicans in Congress didn’t just refuse to comment; they refused even to be told about the contents of the tweet:
Iowa Senator Joni Ernst took a similar tack:
The unwillingness of major Republicans even to acknowledge the contents of Trump’s tweets suggests how radioactive they are. College-educated white women, traditionally a mainstay of the GOP, are drifting away from the party in the Trump era. The president’s antics will make the problem worse, which presents an opportunity for Democrats like Gillibrand. As she emerges as a national leader and a strong contender for the Democratic presidential nomination in 2020, her main task will be to convert the bold stance she’s taken against men like Franken and Trump into an expansive message about achieving gender equality in America.
Since 2012, when she saw the documentary The Invisible War, Gillibrand has been on the forefront of pushing for the military to crack down on sexual abuse. Her tough grilling of military leaders has earned her enemies in her own party, including stalwart Pentagon allies like Senator Claire McCaskill and now-retired Senator Carl Levin. Gillibrand’s signature reform measure, the Military Justice Improvement Act, has met with resistance because it would weaken the power of military commanders in overseeing rape and assault cases. It fell five votes short of the 60 needed, but gained genuine bipartisan traction.
The test for Gillibrand will be on the policy front. Can she turn the anger of the #MeToo moment into a lasting political movement by mainstreaming an agenda that puts gender equality front and center? Such an agenda would carve out a political niche for Gillibrand, but it’s also risky, as it could be seen as narrower than, say, the economic populism of senators Bernie Sanders and Elizabeth Warren. As the example of the Military Justice Improvement Act shows, Gillibrand will face resistance in her own party. Moreover, Trump and other Republicans will try to stoke an anti-feminist backlash.
That’s not likely to stop Gillibrand. She has shown, in her campaign to reform the military, that she has the political courage to stake out a controversial position. And to her credit, she has always seen sexual misconduct as a systematic problem, going beyond the misdeeds of a few men. So while she might be calling on Trump to resign, this is surely just the beginning of a serious effort at broader institutional reforms.
There’s an element of political theater to calls for Trump’s resignation, since they obviously won’t lead to his removal. But the real goal is to use Trump’s undeniable sexism to make issues of sexual harassment visible and politically salient. If Gillibrand is sworn in as president in 2021, it’ll be because Trump was the ideal foil for her. He might yet have a lasting legacy in helping elect the first woman president.
An explosion of online literary criticism is settling down into an interesting heap of debris. Over the last few days our feeds—I say “our,” since together we form the kind of collaborative group that Stanley Fish would call an “interpretive community”—have rumbled with talk about “Cat Person.” Kristen Roupenian’s story of a bad date has invited a lot of different readings and prompted a lot of good conversations. We’ve talked about genre, narrative voice, “relatability,” and creating bodies with words. Some of us were unimpressed, while others snarled at the unimpressed’s incomprehension of Roupenian’s achievement. All agreed that it was an unusual phenomenon, a short story going viral, and that was a kind of news.
Since the story was about the awkward, sometimes menacing push-pull between a young woman and her older male date, various outlets have woven this piece of fiction into the ongoing conversation about sexual harassment, as if it were a personal essay or reported piece. “It has women saying, in other words, ‘Yeah, us too,’” wrote Olga Khazan in The Atlantic. While it is totally legitimate for a reader to respond that way, as an approach to criticism it turns the story at hand into a tool for digging in the hole of reality, rather than an imagined world that has its own rules.
The truly interesting thing about those articles, however, is that they demonstrate the huge gap between the new literary criticism taking place online and the media’s ability to respond to it.
In this case, the media has been thrust into the position of the literary critic, drawing lines between the artwork and the broader culture. This isn’t a bad development, exactly—it’s great that a short story is making headlines. But it is also worth noting that the boundaries of literary criticism, at least as they are traditionally conceived, are being exceeded across the internet. The response to “Cat Person” is the latest evidence that we have entered new territory for online criticism, and no one quite knows what to make of it.
Rupi Kaur is a good example of the unsatisfying way in which literature, social media, and criticism intersect these days. When the 25-year-old poet hit the bestseller list, she got the smirky profile treatment in The Cut, which made Kaur look a bit silly for caring more about book jackets (“I’ll collect a lot of covers that inspire me”) than their contents. This gentle mockery stemmed from Kaur’s astonishing career trajectory. She seemed to have spun her popularity on social media into literary fame, to the detriment of Poetry writ large.
But this missed the key to Kaur’s appeal, which, on Instagram and in print, lies in her facility with graphic design. She uses words like objects, placing them in relation to other objects (her own face, a flower). Even though her poetry is not very good in my opinion, I think that her Instagram posts are very good, according to the particular visual standards of that medium. But “criticism” as a genre wasn’t capable of expressing that appreciation, partly because there is no Times critic who writes on social media, about social media. Those who understand Kaur’s appeal are left to have their own conversations, in their own app.
“Cat Person” is a short story, not a book, so until Roupenian publishes her collection it is not eligible for a review in one of the influential old literary criticism hubs. Instead, it has been treated as a quasi-news story, to be caught before its moment on Twitter has faded. It is being digested by critics whose job it is to digest cultural news, then regurgitated to readers as more fodder for the news cycle.
So when a literary phenomenon happens on social media, readers get the story-about-the-story, a commentary on how the conversation played out before it’s even finished. It’s the “Here’s Why You Can’t Stop Talking About ‘Cat Person’” style of take, and it treats you—the conscious and collaborative reader—like a consumer. This state of affairs is horribly unfair. It does no justice to the richness of literary conversations online.
But then again, neither do book publishers. The marketing of literature on social media remains an embarrassment to readers and writers. In trying to sell debut novels, a book publicist will cannibalize the tools of Kaur-esque Instagram-lit and photograph the book’s jacket on a pretty table, beside a manicured hand and a bunch of flowers in a ceramic vase. Again, this is a mismatch: The publishing houses treat the book like a lifestyle commodity, while trying to tap into social media’s interest in literature. Thousands of people on Twitter did not go wild over “Cat Person” because they love lattes and tulips.
Literary critics, news journalists, and marketing specialists alike are all failing to connect with this online community of readers. Each group is hanging in difficult tension with the idea of belonging to that community. The latte-posting corporate account is trying to say, “Hey, I’m one of you—buy my stuff!” The aloof literary critic is saying, “I’m not one of you, and I’m going to pretend you don’t exist.” Worst of all, perhaps, is the journalist who is neither here nor there, both outside and in, merely reflecting what you have already thought, but not quite.
If we return to Stanley Fish, we might get a better sense of where the journalist goes wrong when he reports on online readerships. When a writer comes along to a conversation and inserts his take, his claim is that “his interpretation more perfectly accords with the facts,” as Fish writes in his 1980 work Is There a Text in This Class? Here’s what’s really going on with “Cat Person,” the writer opines, and cuts through the wreathing fog of ignorance that surrounds it. But the commentator’s actual purpose in this effort is to convince us of “the version of the facts he espouses,” by, first of all, making us submit to a certain framework at the outset—“the interpretive principles in the light of which those facts will seem indisputable.”
In other words, the way that a journalist presents the conversation (“A Viral Short Story for the #MeToo Moment”) conditions us to believe that he has the answers, even though “the facts” in this case are an invention. What he is really doing is layering more fictions onto fictions. The writer’s “greatest fear,” Fish writes, is that “he will stand charged of having substituted his own meanings for the meanings of which he is supposedly the guardian; his greatest fear is that he be found guilty of having interpreted.” Journalists like to pretend that they’re flourishing the noble sword of truth, and to prove it will dabble in the “aggressive humility” of posing as a reporter when really they’re acting like critics. They pretend merely to describe a phenomenon, when really they are doing what every other social media user is doing, all day long: interpreting.
Ironically enough, interpretation is precisely what Kristen Roupenian has described as the core of “Cat Person.” In an interview with New Yorker fiction editor Deborah Treisman, she said that a bad-date incident in her life “got her thinking about the strange and flimsy evidence we use to judge the contextless people we meet outside our existing social networks, whether online or off.” The only thing that characterizes the woman’s sense of her date is his volatility. Her image of him is “based on incomplete and unreliable information, which is why her interpretation of him can’t stay still.”
As a journalist who is guilty of Fish’s charges, I’d say this is what thinkpiece authors are doing when they “report on” the life of literature on social media. When I sit down to deliver my take, I have to pretend that I’m uniquely privy to “the facts,” while simultaneously convincing you to see things in a light of my own casting. But in this particular case, it won’t do. “Cat Person” is a story about a lack of information, and the layering of interpretations that its protagonist assembles to “understand” her date. Reader: beware anybody who comes to you promising investigative truths about fiction. Reporting on literature is “interpretation in another guise because,” Stanley Fish writes, “like it or not, interpretation is the only game in town.”
Martin Luther shattered Christendom and transformed the West, making the modern world possible and inevitable. This is the consensus among historians, whether they see this sixteenth-century monk as an epochal hero, bringer of enlightenment and tolerance, or as the pre-eminent agent of our spiritual and cultural ruin. In other words, Luther is at the center of a long and contentious dispute about the origins and nature of the modern world.
Granting Luther’s courage and his gifts, which were great enough to make his legend almost plausible, almost able to stand up to informed and critical appraisal, I will propose that, to put the matter plainly, this is not the way history works. The opposing sides in the debate about Luther are both deeply invested in his legend, which has given it seemingly unquestioned authority. This matters because what might be called the legitimacy of the modern era, which emerged in the fifteenth and sixteenth centuries, is a question that often takes as its point of departure that gaunt monk nailing his theses to a church door. It is strange to speak of the “legitimacy” of an era. But it is appropriate in this case because many writers have for a long time believed that the modern ought not to have happened, that the world before Luther was a worthy human habitation, and that after him it was a desolate place, oppressive to the human spirit despite its material brilliance and success. They interpret the transformation of the European world as the fall of an ancient religious order and, depending on who tells the tale, as the rise of a soulless individualist materialism, or something of the kind. A recent book announces that Luther “rediscovered God,” clearly an overstatement. More typically the modernity he is supposed to have initiated is treated as essentially secular. This is also an overstatement.
There were people of very great consequence active in Luther’s world—the Grand Turk, Charles V of Spain, Francis I of France, Michelangelo. Among Luther’s other important contemporaries, Copernicus surely deserves a mention, and Magellan and Columbus. And there was the Henrician Brexit, the withdrawal of England from papal and Roman authority by Henry VIII that would make Britain, for the purposes of European power alignments, Protestant, a matter of great historical consequence. Henry VIII did not owe any debt to Luther or reform his church along Lutheran lines. There had been near-breaches between Rome and various English kings over the centuries, preparing this final one. Yet great political alliances, and new conceptions of the earth and the heavens, are not assumed to have a part in the emergence of the modern whenever Luther is put at the center of the frame.
In historical terms, Luther is singular in the fact that his place is secure, even despite the whole power and weight of papal opprobrium, of outlawry and condemnation for heresy, that were brought down upon him. He had important precursors whom he names, who have effectively disappeared from history under this same weight, notable among them the fourteenth-century theologians John Wycliffe and Jan Hus. For generations the followers of these precursors were suppressed, they and also their books publicly burned. Though the influence of their movements persisted for centuries, and though they may indeed share with Luther some credit or blame for the making of the modern world, they have slipped into obscurity. It seems that the word “heresy” impresses historians deeply, that it carries the suggestion of an irrational, possibly sinister zeal, marginalizing all those who are stigmatized by it. Luther’s giantism is in part an effect of an isolation from his context that is deeply misleading, however well it serves hagiographers and demonizers.
Luther was a brave and brilliant man, but he was not the innovator he is made out to be by those who venerate him, those who detest him, or those who simply carry on in milder form with the assumptions about him that were established in the old days of raging religious polemic. One scholar finds that “in the late Middle Ages there were basically two options with regard to salvation: the teachings of Thomas Aquinas and William of Ockham.” This statement is typical in assuming that all thinking on this great subject was confined to the upper reaches of Latinate intellectualism. Luther’s actions and his doctrines need to be looked at in light of the fact that there was an emerging vernacular intellectualism associated with the strong and persisting movements of religious dissent to be found throughout Europe from at least the twelfth century forward.
Many of these dissenting groups shared central tenets among themselves and with Luther as well. In fact their beliefs were consistent enough with one another and over time to constitute an alternative religious tradition, occluded but never extirpated by the dominant church. Such movements are often and dismissively called populist, though their doctrines are as moving and refined as any Europe produced. It is characteristic of these movements that their leaders are highly educated: Professor Wycliffe, Professor Hus, Professor Luther. Their books were destroyed as often as they were found, and yet they had a potent literature.
Luther read the letters of Jan Hus from a century earlier. He had Hus’s letters published in Wittenberg, provided a preface, and wrote admiring comments in his copy. Thomas Cranmer owned a copy, as well. Luther says of the anonymous Theologia Germanica, written around 1350, that it taught him more about the nature of God than anything other than the Bible and St. Augustine. The Theologia, dating from the time when Wycliffe was active, is an early example of a German-language metaphysics written to be accessible to those who were not educated to read Latin, the language of the church and the universities. Meister Eckhart, perhaps the first writer of theology in German, lived until 1328. Johannes Tauler, his younger contemporary, whom Luther is known to have read, preached and wrote in German, and is considered an early master of the language. Like Eckhart he was a Dominican, a leader in a largely lay movement called the Friends of God. Its adherents were men and women from all ranks of society. It emerged in and spread through Bavaria, Switzerland, the Rhineland, and the Low Countries. Tauler’s surviving sermons are exquisitely gentle. The Theologia envisions a reconciliation of Jesus with Judas. It is a lovely thing, not mysticism so much as an exploration of religious consciousness within the terms of Christianity. Though its German is said to be unpolished, its thinking is certainly not. It would seem that Luther’s thought should be considered in light of acknowledged influences, and that these books would be of particular interest because they had, tellingly, an underground life. They are seldom mentioned.
In nailing his Ninety-Five Theses to a church door, Luther perhaps intended no more than to provoke an academic debate. Church doors were regularly used then to announce public events. The theses’ explosive reception could well mean that the questions the professor had posed for debate were live issues far beyond the university. The printing press, developed in the 1440s, was a factor, of course, in spreading these debates among a wider range of people. The Protestant Reformation could hardly have happened without it. And Luther was important in establishing print culture, since his books and pamphlets would be in high demand all over Europe for decades. Luther wrote in German or translated his Latin writing into German. That population so seldom noticed by historians, the literate who did not read Latin but were eager for access to theology, scripture, and history, and current affairs as well, were highly receptive to what he wrote.
The emergence of this public, a middle class of sorts able to buy books, is demonstrated in the surge in publishing that characterized the Reformation period. Letters of Obscure Men, a book associated with Ulrich von Hutten, which predated Luther’s Theses by two years, is a satire on the church and the universities, intended as a defense of the humanist and Hebraist Johannes Reuchlin. It was banned on publication and was nevertheless read all over Europe. The survival and distribution of forbidden books in premodern Europe and even the numbers in which they were burned would say at least as much about the intellectual culture of the time as any university curriculum. Early presses could be dismantled and moved piecemeal, a fact that was still important at the time of the American Revolution. This made the banning of printed books much less effective than one might suppose. Before the press, we must assume organized and continuous copying of manuscripts.
Another factor very seldom mentioned in accounts of Luther’s world is the flourishing of a dissenter nation bordering Germany and deeply entangled with it. Bohemia had been Hussite for a generation when Luther was born and it remained Hussite for a generation after his death. In 1412, Jan Hus, a priest and professor at the University of Prague, took offense at the sale of indulgences there and preached against them. As was often the case, the indulgences were offered by the pope to raise money for a crusade, this one against the king of Naples. Hus’s preaching led in due course to papal bulls being sent to Prague excommunicating him. Students burned the bulls. Hus was forbidden to preach, so he left Prague and traveled around the countryside, preaching in Czech. He was summoned to the Council of Constance to defend himself against the charge of heresy. He told his judges there that he could not recant any teaching of his that could not be proved wrong on the basis of Scripture. Though he had come to the council on a safe conduct, he was imprisoned, then burned, together with his books, in 1415.
Though Hus had died a century before Luther posted his theses, his fate was still well known. Executions of the kind were meant to be cautionary. Hus’s death was made especially memorable by the fact that it caused outrage in Bohemia that spurred a revolt and changed the character of religious culture in the very center of Europe. Hussite Bohemia split with Rome so radically that it became the object of five papal crusades, all of which it repelled. These are not events that could have passed unremarked in Europe at large. Crusader armies were made up of soldiers from many countries, who can hardly have failed to bring back news from Bohemia. One of these armies included a thousand English archers, most of whom no doubt returned home, since the crusader army disbanded before battle was engaged.
Perhaps Shakespeare, who, like Luther, did not live to see the end of Hussite Bohemia, gave the landlocked country a coast because it was singular, elsewhere and otherwise, like Prospero’s island. The Hussites were largely peasants, supported by like-minded nobles. Shakespeare’s scenes set in Bohemia use peasant characters for comic effect. But the words Perdita speaks, defending the aesthetics of the natural over the artificial and refined, could be applied as well to the ambitious use of demotic language, a practice that, at the time Shakespeare wrote, was still new. Hus’s followers had made a translation of the Bible that was as important to their language as Luther’s would be to German. And a literature grew up around it, said to be very fine, though it was lost in the extirpation of Bohemian-language culture that followed their final defeat by Rome in 1620. When Shakespeare wrote, however, this fragile, rather democratic experiment was still being made.
When Luther read Hus’s letters he said, famously, “We are all Hussites.” Behind Hus was John Wycliffe, the distinguished Oxford philosopher and theologian also denounced as a heretic in 1415 along with Hus, though he had died in 1384. With others, he made the first complete translation of the Bible into English. He was associated with the poets Geoffrey Chaucer, John Lydgate, and John Gower as well as William Langland, the itinerant cleric who wrote Piers Plowman. Wycliffe wrote and preached in English as well as Latin, so his influence was intense at home and significant in Europe. Though he wrote before the printing press, copies of his books were numerous enough to make good bonfires. Hundreds of them were burned in Prague during Rome’s early attempts to stem his influence there. Hundreds more were burned in Rome and, of course, in England. Before his death, while he is sharing out among his friends things he has left in Prague, Hus tells one of them he can have any of Wycliffe’s books he might want. Lollards, Wycliffe’s followers in England, were students and lay preachers who surreptitiously, by dark of night, brought Scripture in English to the poor.
In this they were like the Waldensians, also known as the Poor Men of Lyon, whose origins can be traced to 1170. They too used vernacular Bibles and produced a literature, of which a few long poems survive. Like the Lollards, they withstood suppression and persecution until they were finally absorbed by the Reformation. Waldensians brought the belief in Communion in two kinds to Bohemia, where it became central to Hussite identity. Hus corresponded with Sir John Oldcastle, the Lollard knight who led a revolt against Henry V. Jean Calvin’s cousin, Pierre Robert, called Olivetan, made a new translation of the Bible from the original languages for the French Waldensians. The young Calvin seems to have written the introduction. Connections like these arise at random as one reads in the period, which suggests that there are many more to be found. Luther asks, “How many Saints must you imagine those of the inquisition have, for some ages, burnt and killed, as Jan Hus and others, in whose time, no doubt, there lived many holy men of the same spirit!” This was in response to Erasmus’s argument that the fathers and saints of the church should be accepted as authoritative interpreters of Scripture. The reformers also had fathers and saints.
These are important instances of the fact that Europe before Luther was by no means a religious monolith. There were ideas abroad more radical than his. Indeed, these dissident movements had already challenged the papacy and the priesthood, transubstantiation, indulgences, relics, icons, and clerical celibacy. Twelve “Lollard Conclusions” had been nailed to a door at Westminster in 1395 and were subsequently read in an elaborated form in Parliament. They say, for example, that “any man or woman in the law of God can consecrate [the bread and wine of Communion] without the supposed miracle,” and “manslaughter (either by war or any pretended law of justice, for any temporal cause, or spiritual revelation) is expressly contrary to the New Testament, which is the law of Grace, full of mercy.”
Distinctive thought was too deeply entrenched in dissident cultures to be open to negotiation in coming years, when divisions within the Reformation became important. The dependence of all these suppressed communities on the text of Scripture was a thing they had in common with Luther. But their existence outside the highly structured world of institutional Catholicism was very remote from Luther’s long immersion in monastic life, which by all accounts he embraced with exceptional zeal. Communities in hiding would necessarily forgo images and sanctuaries. They would not only cease to think of these things as a part of religious experience but, as communities, would identify with secular simplicity as a setting for true worship. And they would leave very little physical evidence of their presence, except in their books.
This is perhaps the origin of the Protestant schism. Luther was intractable in his position concerning the sacrament of Communion, but this would not have been so important if the other side had not been equally intractable. Luther’s great departure from what would become majority Protestant practice and belief was his doctrine of consubstantiation, which retained for most purposes the Catholic understanding of the consecration of the elements of the Eucharist. This may sound merely technical to people unfamiliar with the terms of theological controversy, but in these terms it has been a matter of profound consequence. As a priest Luther was deeply moved, even terrified, at performing the consecration of the host, the climax of the Mass, when bread and wine were believed by him to become the actual body and blood of Christ. As a reformer he retained a version of the Mass and insisted on a “real presence” of Christ in the consecrated elements. Sacramentarians, as he called those who took exception, felt that this doctrine preserved an unacceptable difference of status between priests and laypeople, a degree of dependence on one side and authority on the other that the Reform should reduce or eliminate.
Many modern readers, at the conscious level, at least, are not accustomed to the idea that the form of worship asserts a view of cosmic reality. In Catholicism, the priest is thought of as a mediator through the sacraments between God and his people. In Protestantism, no human mediation is called for because the relationship between God and the individual is thought of as direct. Of course the saints in Catholic tradition experience a direct relationship with God, and the Protestant traditions, in most instances, greatly value their two sacraments and their clergy. But the tendency in the first is to assume a cosmic hierarchy, and in the second to assert a fundamental equality of Christians before God. The distinction made here is far too simple, but it is justified historically by the importance in Protestantism and its precursors of lay preachers, sometimes including women, and of teaching and preaching in the vernacular. Their clergy were set apart from their adherents by preparation, five years of training in the case of the Waldensians, who memorized the gospels of Matthew and John as well as sections of the writings of Paul. This movement was distinguished by its radical poverty, voluntary at first, then, as it was persecuted and dispersed, as a fact of life. So the pastor and believer had poverty in common.
A critical-minded sacramentarian might wonder what anything other than a “real presence” might be. Jesus said, “Wherever two or three are gathered in my name there am I also.” Certainly this would describe any communion service. Flannery O’Connor spoke with some contempt of the idea that the Eucharist is merely symbolic. But when the whole of its meaning is considered, it can never be “merely” anything. Whether it enacts the sacrifice of Christ or simply commemorates the transformation of cosmic reality through the proximate human presence of God in the world, it would seem to be too replete with meaning for any of its forms to be deficient. Still, this was a breaking point, regrettable if the triumph of the Reformation was the thing to be desired.
Luther’s commitment to this outcome, the success of the Reformation movement, compounded as the movement was of groups whose theologies did not precisely align with his and were farther than his from Catholicism, seems to have flagged over time. It clearly was not unmixed to begin with. This suggestion is not consistent with the view of him as saint or as demon, but it should not be dismissed on the grounds of its unfamiliarity—quite the opposite, given the myths that have always surrounded him, the interpretation of his life that gives its impact the fundamental simplicity of an ax blow.
My model of the emergence of the modern world centers on the rising importance of literacy in the vernacular, much accelerated by the greater availability of books of every kind and their declining cost—this, of course, combined with an awareness of the ambient world, which was neither calm nor stable. The Turks were an imminent threat to Vienna, and rivalry between the kings of France and Spain was distracting the leaders of Europe from dealing with the problem. In 1527 the armies of Charles V, the elected Holy Roman Emperor, sacked Rome and made a prisoner of the pope. Abuses of sculptures and paintings, apparently committed by soldiers from Germany, aren’t surprising, since anticlericalism and German hostility to Rome had a very long history before the Reformation. Charles’s motives, which amounted to a desire to consolidate power, are somehow never quite impugned, though he certainly would have known how invasion affects a city. In this case, his armies, then plague, reduced the population by four-fifths.
Turbulence seems to have been unexceptional. Two centuries earlier, when Wycliffe and Hus were active, there were two and then three popes at the same time. One of them, a John XXIII who has lost his place in the sequence of popes, attended the Council of Constance where Hus was condemned, but fled the city alone by night disguised as a poor man to escape the demand of another Holy Roman Emperor that he resign the papacy. The council subsequently denounced John for every wickedness imaginable, including heresy. The serene order called Christendom was clearly an ideal rather than an achieved state of things, though of course ideals can have great power. In any case, the emergence of the modern world seems to have been largely an upsurge of ideas and information and oppressed dissent, new wine in old skins. It threatened and transformed power structures that had been in part shored up by the dominance of Latin, not only in religion but also in learning and law.
Luther’s Theses, written in Latin but translated and published in German, not by him, are titled “Disputation of Doctor Martin Luther on the Power and Efficacy of Indulgences.” Indulgences, as they were understood and practiced at the time, were an intrinsic part of a model of reality that saw the passage of all but the most perfect souls from death through a condition, visualized as a region, called purgatory. There they suffered for sins not absolved in life through the ministrations of the church. These sins were considered debts to God that must be satisfied by suffering. Their pain was extreme and indeterminate, the end of time being its outer limit. Dante’s Purgatorio is a regime of condign punishment, where behaviors contrary to their faults are imposed on the sufferers and embraced by them, since they are the redeemed and will ultimately enjoy heaven. Other writers immerse them in flames. In any case, the church was believed to have what is called a treasury of merits. Holy people, saints and martyrs, and Mary and Jesus above all, were believed to have merits far in excess of those necessary to salvation. These supererogatory merits were at the disposal of the pope, who could, in effect, offer them for sale. The proceeds were to be used by the church, whether to fund a crusade or, in this case, new buildings. When Luther wrote his Theses, money from indulgences was being used to build St. Peter’s Basilica in Rome. The indulgences could shorten the time of suffering of a friend or family member, though by no specified amount. It is touching and excruciating to think how effectively these campaigns raised money. The poor were asked to choose between their living families and, say, a parent who had passed through a hard life and a hard death into a state of prolonged extreme suffering.
Writers on the subject tend to say this suffering is mitigated by the vindication of God’s justice and the ultimate prospect of heaven. A contemporary of Luther’s who became St. Catherine of Genoa wrote a treatise on purgatory. In it she says that souls in purgatory feel an always increasing joy at the knowledge of God’s justice and grace. She speaks of the fires that torment them as interior, and as the flames of divine love. “Yet their joy in God does by no means abate their pain.” The selling of indulgences suggests that purgatory was not a state in which compassion would allow anyone to remain if it were possible to shorten the experience for her.
Luther proposed that “Christians are to be taught that he who sees a man in need, and passes him by, and gives [his money] for pardons, purchases not the indulgences of the pope, but the indignation of God”; and “if the pope knew the exactions of the pardon-preachers, he would rather that St. Peter’s church should go to ashes, than that it should be built up with the skin, flesh and bones of his sheep”; and “Why does the pope not empty purgatory, for the sake of holy love and of the dire need of the souls that are there?” and more to the same effect. He says, “To repress these arguments and scruples of the laity by force alone, and not to resolve them by giving reasons, is to expose the Church and the pope to the ridicule of their enemies, and to make Christians unhappy.” Since printed copies of the Theses spread like wildfire through Germany and beyond, it should probably be assumed that they did raise issues that were of great moment to “the laity.”
This cosmic model of hell, purgatory, and heaven, merits and indulgences, with all they assume about the nature of sin, merit, and grace, about the nature of God and the soul, and of suffering, and about the church and the papacy as well, was exclusive to the Catholic Church. The texts produced by other traditions, whether it was the German-literate population who read them or Luther himself, would offer another coherent cosmology according to which the grace of God is sufficient to erase all memory of fault or sin among his redeemed. The anonymous writer of the Theologia Germanica says, making the extremest case, “If the devil would come to true obedience he would turn into an angel and all his sin and wickedness would be amended and atoned and all at once forgotten.” Sola gratia, salvation by grace alone, must be understood against the background of a doctrine of God as requiring that, before salvation can be fully realized, sin must be repaid by suffering, a conception of God as in some degree satisfied by merits earned by others and applied to those fortunate mortals who left wealth or a devoted family when they died.
Purgatory and indulgences did not remain central for Luther in these terms. But the issues they raised for him persisted implicitly. For example, his detractors ascribe to him a gloomy anthropology, in response to his teaching that every human act is tainted with sin. This idea arises quite naturally from his earlier belief in the doctrine of purgatory, which is a response to virtually universal, and inadvertent, sinfulness. Cardinal Cajetan, by whom Luther was interrogated, wrote, “All possible evil is truly predicated of us and of our works in so far as they are ours. But on the other hand infinite good is truly found in the same works as products of divine grace.” St. Catherine says, “That which man judges to be perfect, in the sight of God is defect. For all the works of man, which appear faultless when he considers them, feels, remembers, wills and understands them, are, if he does not refer them to God, corrupt and sinful.” Then she says, “For, to the perfection of our works it is necessary that they be wrought in us but not of us. In the works of God it is he that is the prime mover, and not man.” Whether Luther was aware of her or she of him, St. Catherine was his contemporary, and her thinking about human sinfulness and what he would call the “bondage of the will” is not dissimilar to his. One is in bondage either to one’s own will, which inevitably vitiates one’s actions, or to the divine will, which makes them truly good. The humanist Erasmus, in his Discourse on Free Will, intended as a rebuttal of Luther, asserts that “mankind is lazy, indolent, malicious, and incorrigibly prone to impious outrage,” and “people are universally ignorant and carnal-minded. They tend toward unbelief, wickedness, and blasphemy.”
The influence of thought outside Catholicism should not at all preclude attention to powerful influences within it. And comparisons among those who share a period and culture are always useful. Luther by himself might seem gloomy and misanthropic, but he is not at all exceptional by the standards of his time. Luther differs from orthodox writers in taking “works” to be corrupted in being self-interested, that is, in being undertaken to help secure one’s own salvation. There is an ambiguity around the word “works” that Luther himself does not explicitly clarify. In ordinary use it can mean fasts, prayers, pilgrimages, and the buying of indulgences, as well as giving alms to the poor. By his lights gestures and ceremonies can have value in disciplining oneself, but charity toward one’s neighbor, of which giving of alms is only one expression, is free and Christian when it is done without thought for one’s own benefit. Relying entirely on God’s grace in the matter of one’s salvation means abandoning all thought of merit, all spiritual self-interest.
The question at the center of the controversy surrounding Luther concerned the very nature of the church. Western Christendom seemed little inclined to acknowledge the fact that there was an Eastern Christendom, the great ancient tradition centered in Constantinople, that never accepted the papacy, the predominant authority of the bishop of Rome. For us “church” is associated with institutions—buildings and bureaucracies as well as creeds and customs and public observances. But unnumbered “house churches” have existed in every setting where Christianity or sects within it are under threat, in fourteenth-century England, in modern China, and in the communities to whom Paul sent his letters. The best we can do is to grant the word the broad meaning and the wide variety of interpretations it has acquired in the course of its history. These were fully present in the religious culture of the Middle Ages.
Luther conceived of a church in some ways very close to Catholicism. For example, the Augsburg Confession, intended to find a principled truce with Rome, retains a Latin Mass. It had been usual for generations for writers to criticize the Catholic Church for how it got money and how its money was spent, for slothful, ignorant, or immoral priests and so on. Boccaccio’s Decameron offers an example of satire on the subject, as does Chaucer’s Canterbury Tales. Luther’s criticism was more fundamental, not concerned with abuses that arose from time to time and could be addressed by institutional reforms.
In reply to Erasmus’s rebuttal of his criticisms, Luther thanks the renowned scholar for seeing and addressing what is most central to his argument, that is, his rejection of freedom of the will. In principle, theological questions always have implications much broader than the terms of debate seem to imply. As a philosophical question, free will and its antithesis, determinism, enlist psychology, biology, environment, and ethics. As a theological question, it has to do with the relationship of God to humankind as individuals, particularly in the matter of their salvation. The role of the church above all is to make salvation meaningful as a word and attainable as an object, as it has done historically through teaching and example and through its sacraments. The great teacher and exemplar is Jesus of Nazareth, whose birth was an incarnation of God, whose death was a sacrifice made to put humankind at peace with God, and whose life enacted the love and truth that would conform our lives to the nature and the will of God.
“From faith there flows a love and joy in the Lord. From love there proceeds a joyful, willing, and free mind that serves the neighbor and takes no account of gratitude or ingratitude, praise or blame, gain or loss,” and “Our trust in him means that we are Christs to one another and act toward our neighbors as Christ has acted toward us.” This is from a tract titled “The Freedom of a Christian,” sent by Luther to Pope Leo X in 1520 to clarify his position on freedom and what he called “bondage.” He received no response. Luther describes a relationship to God and neighbor most Christians would recognize at least as an ideal. But by divorcing merit from “works,” a word which here means serving one’s neighbor, and by resolving them into joy and love, he places the soul outside important structures of the church that were offered as the means of salvation. So, conciliatory as his words may sound, they actually restate his case. The pope no doubt noticed this. The following year, Luther was excommunicated.
However, the most radical challenge to the church was the doctrine of predestination put forward by Wycliffe, by Hus, and then by Luther. In support they could cite the formidable St. Augustine. But the authority and the traditions of the church were overwhelmingly against them. The issue was whether one was free to achieve personal salvation by meritorious acts or renunciations or through the good offices of the church. Predestination precluded this.
Scripture is not clear on the question of free will or determinism, or it is not interested in the distinction. Erasmus, in his response to Luther, advises a middle course between these opposites, having made a case for free will on the basis of Scripture, which he does not finally consider determinate. However, he objects strongly to Luther’s having raised the question. He says, “Some things can be noxious because, like wine for the feverish, they are not fitting. Hence such matters might be treated in discourses among the educated or also in theological schools, although it is not expedient even there I think unless done with caution. Definitely, it seems to me, it is not only unsuitable but truly pernicious to carry on such disputations when everybody can listen.” Wicked men might become worse if they were aware of these arguments. Luther, perhaps more aware of the accessibility of the thoughts and writings of Wycliffe and Hus, or of the growing pressures of print and vernacular literacy, dismisses the idea that the laity were to be excluded from these disputes. Ultimately the writings of Erasmus as well as Luther were condemned by the church.
The questions raised by the doctrine of predestination are real and profound. When Hus, then Luther, said he could not recant, did he feel as if he were acting freely or as he was destined to act? Both at once, no doubt. In any case, the mere word “predestination” tends to distract critics from the fact that Wycliffe, Hus, and Luther were indeed major reformers, men who called on others to examine their beliefs and their lives, and to adhere with a new zeal to the laws of Moses and the teachings of Christ. They were not in any sense passive, as belief in predestination supposedly implies they should be. Their immediate impact is clear. Peasant revolts arose in the time of each of them, directed not at the Reformers, as recent interpretations of popular sympathies might lead one to suppose, but against the traditional religious and political order, which would have appeared to them to be weakened. Of the three, one rebellion, in Bohemia, succeeded. Two, in England and Germany, failed. The shameless brutality with which they were crushed in both cases may be taken as a measure of the inhumanity against which they rose up.
Two terrible scandals mar Luther’s life. One was his response to the Peasants’ War, in which he urged extreme violence against the rebels. The other was his writing against the Jews, whom he assailed in very similar, very violent terms. There is no excuse to be made for this, but a reason for it might have been that the existence of communities considered heretical was tenuous. Whole villages of Waldensians had been slaughtered. Wittenberg, where Luther lived most of his life, was protected by important German princes, but to tip it in the slightest degree toward association with any disfavored population would be to put it at risk.
Wittenberg did fall, to that same Charles V who had depopulated Rome. Luther was dead by then. His wife died from injuries she suffered fleeing the city. Luther acted irrationally and discreditably toward peasants and Jews, a fact probably related to his having lived for years under a death sentence. Heresy was considered so highly communicable then that his friends, children, colleagues, and students were all imperiled together with him. He was far from the first in his culture and times to attack the Jews or despise the poor. These things contrast very jarringly with the most Christian of his insights, as they contrast jarringly with the self-perceptions of Christendom.
The Peace of Augsburg, signed in 1555, which for a while established a truce between Catholics and Lutherans within the Holy Roman Empire, did not acknowledge other Protestant groups, who had little or nothing in the way of princely protection and who remained liable to prosecution as heretics by both Catholics and Lutherans. Luther was no longer alive, but his readiness to dissociate himself from vulnerable groups seems to have survived him.
Fierce old history bedeviled Europe after Luther as it had before him. Wars there found religious pretexts or were responses to papal crusades, which, as modern historians tend to forget, were carried out within Europe itself, for example the thirteenth-century Albigensian Crusades, which exterminated an entrenched “heresy” and also greatly enlarged the kingdom of France. More than one reason can always be found for any war. It is interpretation that assigns priority to one or another. The Luther legend, the idea that one man and one set of events shattered a great order and brought down chaos, minimizes a terrible continuity in the affairs of the continent before and after the Reformation. Power was shifting as the resources and competences that come with literacy spread through the population. By what means and how extensively it spread no one can know. Depending on circumstances, it could be incriminating and was concealed. We do know that for centuries ragged cloaks hid worn Gospels and that furtive worshippers gathered at night to hear them read. We know forbidden books were cherished at great risk. And we have some of those books, which spoke to the unlearned in their own languages, and were as fluent in high thought and visionary gentleness as anything ever written by a Christian hand.
© 2017 Marilynne Robinson
Campaigning with Steve Bannon in Fairhope, Alabama, a week ago, Roy Moore struck a messianic note. “We’ve got to go back to God, we’ve got to go back to restore the morality of this country,” said the Republican candidate, who is running in a special election for Alabama’s Senate seat. Quoting Ephesians 6, Moore exhorted his supporters to “take up the armor of God” in what had become a spiritual battle for the nation’s soul. “We’ve struggled, but we’ve overcome,” Moore said. “And I think that on December 12 we’ll see an election that the world won’t forget.”
Moore has since retreated from the campaign trail, apparently to avoid uncomfortable questions about allegations that the 70-year-old molested and preyed on teenagers when he was in his 30s. But he does have reason to be hopeful that he has “overcome” those charges: After being abandoned by his party in the immediate aftermath of the allegations in November, mainstream Republicans—including President Donald Trump—have come rushing back, with the notable exception of Alabama’s senior senator, Richard Shelby. Electing a Republican was ultimately more important than Moore’s numerous drawbacks: the charges of pedophilia, his well-documented Islamophobia and homophobia, his contempt for the federal judiciary, even his bizarre comments about slavery, which in earlier years would have been enough to sink a candidate.
Moore is also right that this is an election that won’t soon be forgotten. Win or lose, Moore’s candidacy will haunt the Republican Party for a long time.
The decision to stand behind Moore is a foolishly short-term one. Trump, who like most Republicans backed the establishment candidate Luther Strange in Alabama’s primary, is eager to prove that he’s still a kingmaker, even with poll numbers in the 30s. In the last big election, Virginia’s gubernatorial race in November, Trump’s favored candidate, Ed Gillespie, lost. If Trump can’t get a win in deep-red Alabama, then that will undermine his clout with other Republicans, whose relationship with the president is partly based on whether he can help them win elections.
Trump has another reason for supporting Moore, one that he shares with congressional Republicans. The GOP enjoys a slim majority in the Senate—with only 52 Republican senators, three “no” votes can sink any legislation passed by simple majority. While Republicans were able to pass a tax reform package with 51 votes, they’ve been stymied in their attempt to repeal Obamacare. Moore, who has campaigned against Senate Majority Leader Mitch McConnell and the GOP establishment as much as he’s campaigned against his Democratic opponent Doug Jones, will be more of an unpredictable vote than a generic Republican. But he’ll still be more reliable than Jones.
Recent precedent may have also played a role in the decision to back Moore. It was Trump himself who survived multiple allegations of sexual misconduct to prevail in his election. After the release of the Access Hollywood tape in October 2016, numerous Republicans jumped ship when it looked like Trump was about to lose badly, only to come back into the fold when partisan considerations reasserted themselves. Trump won and Republicans were rewarded with a unified Republican government. All’s well that ends well.
But though Trump won the election, that doesn’t mean that his party has paid no consequences for supporting him. Trump’s victory resulted in an enormous, complex backlash that we are only just beginning to understand. Early indicators suggest that Republicans are heading toward a bloodbath in the 2018 midterms, which will be a referendum on Trump’s presidency and what Trump stands for: misogyny, yes, but also much more, including xenophobia, racism, and a disdain for the rule of law.
Backing Moore only makes the GOP’s Donald Trump problem worse. Taken together, Moore and Trump represent the worst of the contemporary Republican Party. Democratic ads in 2018 and beyond will surely make the connection. An accused child groper and an admitted pussy-grabber: This is the Republican Party that must be voted out.
But it gets worse. Even before Moore came along, the Republican Party’s reputation was in terrible shape. Republicans in Congress have spent most of the year trying to take health care away from millions and giving massive tax cuts to millionaires, billionaires, and corporations—all while raising taxes on millions of middle-class taxpayers. The embrace of an alleged child predator for the sake of passing a massive tax giveaway to the wealthy is a plotline so on-the-nose it would be rejected by Veep.
And yet here we are. It’s no wonder that only 28 percent of millennials think that Republicans care for “people like them,” according to a recent poll. Trump, tax reform, Obamacare repeal, Moore—these are things that will take more than an election cycle to recover from. These are toxic people and toxic policies, and to combine them will have severe repercussions. In 2016, Trump was often treated as an aberration, by both Democrats and the media. Eleven months into Trump’s presidency, it’s simply impossible to make the argument that he and Moore are outsiders. They are the Grand Old Party, and that doesn’t change even if Moore loses, as some Republicans privately hope he will.
Democrats learned this year that the past will haunt you in unpredictable ways. Many on the left are just beginning to reckon with the consequences of Bill Clinton’s sexual misconduct and Democrats’ decision to stick by him in the 1990s. That alone should alarm any Republican supporting Moore for short-term reasons, or even out of antipathy toward liberals.
Moore may very well win on Tuesday. But regardless of the outcome, Republicans have already lost. Even a strong finish by Moore won’t help a party that’s so tethered to Trump and the culture war that it just backed a credibly accused child molester for the Senate. Moore’s candidacy is evidence of a rot in the GOP that is not simply here to stay—it’s growing.
The Republican tax bill being negotiated this week is like a Sudoku puzzle: daunting to novices, but in the hands of an experienced master, a simple case of plugging in numbers. The opportunities for gaming the new rules will make tax lawyers unspeakably rich for the rest of time, finding endless opportunities to shelter, transform, or reassign income to lower their clients’ payments. And there’s another group poised to get in on the tax loophole fun: state governments that, in a few simple steps, can and probably will nullify the biggest sources of revenue in the bill.
All of this fits with a longstanding conservative project: Starve the government of taxes, and use that as a rationale to slash social spending. This would achieve their goal of re-channeling government benefits from the working poor, who require safety net assistance, to oligarchic holders of dynastic wealth. Considered in this light, one wonders whether the hastily drafted, loophole-ridden tax plan was purposely built as a leaky boat.
In a research paper published last week, “The Games They Will Play: Tax Games, Roadblocks, and Glitches Under the New Legislation,” 13 tax scholars explained how wealthy Americans can work the tax bill for further advantage. Because the business tax rate, both for corporations and “pass-through” businesses (like neighborhood stores or partnerships), would be so much lower than the income tax rate, it gives people incentives to essentially become businesses. “To achieve the tax savings, no longer be an employee,” the paper’s authors wrote, “instead be an owner.”
For example, people could push their salaries through corporations they set up, or invest through them, sheltering earnings in a low-tax vehicle. It’s mainly a strategy for the rich, since the poor need their salaries to—well, live on. This can work with pass-throughs as well; an employee taking salary from a company will take home less than an independent contractor. There are supposed to be restrictions barring “service providers” at law firms or financial companies from claiming pass-through benefits, but they are easily evaded. Lawyers working for a firm can characterize themselves as “associates” in a pass-through business that the firm contracts for services. Associates in the pass-through can then take up to $500,000 in earnings at the lower rate.
The sneaky side benefit for corporations here is that they don’t have to provide benefits to an independent contractor, as they do for a salaried employee. The tax bill forces everyone onto solitary islands for the purposes of tax strategy, but paid time off, holidays, sick days, and even health benefits could get lost in the exchange. This relieves corporations, which already get their tax rate reduced from 35 percent to 20 percent, of having to pay those benefit costs.
States can use similar tax avoidance to benefit constituents. For example, one of the biggest revenue-raisers in the bill is the repeal of the tax deduction for state and local income taxes (SALT). This disproportionately affects high-tax blue states that provide more services to their citizens, like New York and California. But Daniel Hemel, an associate professor at the University of Chicago, explained how states can get around SALT repeal.
Under the bill, employers can still deduct state and local payroll taxes. So states could just re-classify state income taxes as employer-side payroll taxes. Let’s say you earn $100 from a company in a state with a 5 percent income tax. Under current law, you would take home $95. But if the state shifted to an employer payroll tax, you would be paid your $95 without any state income taxes, and your employer would pay the $5 to the state. But the employer would get to deduct that from its federal taxes, canceling out the cost. There’s no difference for anyone, except that SALT repeal has vanished for wage income.
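The arithmetic of this workaround can be sketched in a few lines. This is a hypothetical illustration; the function and field names are invented for clarity and are not drawn from Hemel’s analysis:

```python
def salt_workaround(salary=100.0, state_rate=0.05):
    """Compare the worker's position under SALT repeal with the
    payroll-tax swap, using the $100 salary / 5 percent example."""
    state_tax = salary * state_rate  # $5 owed to the state either way

    # Current law after SALT repeal: the worker earns $100, pays $5 in
    # state income tax, and can no longer deduct it federally.
    current = {
        "take_home_before_federal_tax": salary - state_tax,   # $95
        "worker_federal_taxable_income": salary,              # full $100
        "state_revenue": state_tax,                           # $5
    }

    # Swap: the state drops its income tax and levies a $5 employer-side
    # payroll tax; the employer pays the worker $95 and deducts the
    # payroll tax federally like any other business expense.
    swap = {
        "take_home_before_federal_tax": salary - state_tax,   # still $95
        "worker_federal_taxable_income": salary - state_tax,  # only $95
        "state_revenue": state_tax,                           # still $5
    }
    return current, swap
```

The worker’s take-home pay and the state’s revenue are identical in both scenarios; the only change is that the $5 disappears from the worker’s federal taxable income, which is exactly the effect the SALT deduction used to have.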
States have also explored ways to preserve the individual health insurance mandate, another revenue source in the bill. Some of the ideas include automatic enrollment for those eligible, a “continuous coverage” requirement that would penalize those who drop out of insurance with higher premiums later, or simply a state-level version of the mandate. These options would prevent the expected spike in insurance premiums and the increase in the uninsured. They would also reduce the $338 billion in expected federal savings, mostly because the government would still have to pay subsidies for those who don’t lose coverage.
There were certainly a host of drafting errors, late-night oversights, and flat-out mistakes in the House and Senate versions of the tax bill. Lawmakers made the corporate tax rate and the corporate alternative minimum tax the same number, effectively nullifying dozens of corporate tax credits. They built in odd cliffs that will give some high-earners marginal tax rates of 85 percent or even over 100 percent in some cases. They created an investment income standard that will deeply harm small investors.
All of these will lead to more gaming to avoid pitfalls. The research paper highlights some examples. A provision that encourages sales of products abroad could lead to “round-tripping,” where a company sells a product overseas to earn low-tax export income, only to have the product shipped right back. A one-year delay in the change to the corporate tax rate will lead companies to write off costs in high-tax 2018, or to buy equipment in 2018 and sell it in 2019, taking advantage of the rate change. “The wealthy and well-advised will benefit disproportionately from these errors of oversight and haste,” write the tax scholars.
While some problems arose out of oversights, I have to wonder if others were intentional. The less revenue the bill creates, the more Republicans can use that rhetorically to moan about the deficit and push for austerity. It’s not like anyone’s eager to fix the many glitches: The goal is to finish reconciling the House and Senate bills by Friday. And as Politico reported on Monday, “The Trump administration and Republicans in Congress are hoping to make the most sweeping changes to federal safety net programs in a generation, using legislation and executive actions to target recipients of food stamps, Medicaid and housing benefits.”
In this sense, a badly botched tax bill helps conservatives. They don’t want the government to be perceived as being flush with cash. They want as much money to leak out as possible—and they know that only the well-to-do can afford the accountants necessary to take maximum advantage of the tax laws. They then respond by demanding cuts to social spending. The result is a huge transfer of wealth from the poor to the rich. That’s the real game here.
According to my dictionary, a conservative is “a person who is averse to change and holds to traditional values and attitudes.” A radical, by contrast, is someone who advocates “thorough or complete change” and “departure from tradition.”
Which term best describes President Trump?
He’s a radical, of course. At every turn, Trump has challenged our most sacred national traditions: freedom of the press, the rule of law, independent courts, and more. He’s not a conservative; he’s the opposite of one.
So our own challenge, as citizens, is to defend those institutions. It doesn’t matter if we’re Democrats, Republicans, independents, or none of the above. We are all conservatives now, in a dictionary sense. Or, at least, we should be.
But we’re not acting like it. On the GOP side, people who would otherwise indict Trump’s reckless behavior have ignored it. The only Republicans in Congress who have firmly called Trump out (Jeff Flake, Bob Corker) are on the way out; almost everyone else has turned a blind eye, holding their noses all the while. That’s not leadership; it’s cowardice.
Meanwhile, too many of my fellow Democrats have decided to fight Trump by questioning his legitimacy, not just his politics. And that bears an uncanny resemblance to Trump himself, who built his White House campaign on the “birther” lie about Barack Obama.
To be fair, I haven’t heard anyone suggest that Trump wasn’t born in the United States. But I’ve heard plenty of people proclaim that he’s not a “legitimate president”—to quote Representative John Lewis—because the Russians allegedly swung the election to him.
Wait, wasn’t it Trump who kept warning on the campaign trail that the election was “rigged”? John Lewis is a genuine American hero, and Trump is a charlatan, but Lewis’s remark was Trumpian to the core. Lewis doesn’t know whether Russian interference decided the election, any more than you do. Saying so—without real evidence—undermines not just our electoral institutions, but also the special prosecutor whom we have charged with investigating the Russian attacks on them. If we already know the answer, why bother to ask?
Ditto for the much-heard claim that fewer people voted for Trump than for Hillary Clinton, so she should be president rather than him. Under our Constitution, the person with the most electoral (not popular) votes wins. You might think it’s time to revise that practice; I certainly do. But there’s a system for changing it, which is outlined in—yes—the Constitution. Donald Trump became President under the process that has governed America since 1787. So if you say he is “Not My President,” you’re not just dissing Trump; you’re saying that the rules are illegitimate when they yield an outcome you don’t like.
Sound familiar? Again, it’s exactly how Trump operates. If a court rules against him, it’s because of the judge’s ethnic bias; if the FBI investigates him, it’s because the agency is in “tatters” and “the worst in history.” And so on.
Then there’s the Twenty-Fifth Amendment gambit, which might be the most Trump-like of all of the attacks on the president. Every time he slurs a word, the blogosphere lights up with claims that he is physically or mentally incapable of holding office.
Please. The charge is based on rumor and innuendo—Trump’s own signature idioms—rather than on fact. And it also undermines the Twenty-Fifth Amendment itself, which allows the vice president and a majority of the cabinet to recommend the removal of a president who is unable to function as one.
If you look at the congressional debate surrounding the amendment, which passed in 1967, you’ll see that it was designed to remove people who were clearly and unmistakably handicapped. Indeed, New York Senator Robert F. Kennedy—brother of the murdered president—warned that the amendment might become a weapon for removing leaders who were physically capable but politically unpopular.
Not to worry, replied Indiana Senator Birch Bayh. “We are not getting into a position ... in which when a president makes an unpopular decision, he would immediately be rendered unable to perform the duties of his office,” said Bayh, a central figure in the campaign for the Twenty-Fifth Amendment.
To be removed, in short, you had to be absolutely and unambiguously incapable—like John F. Kennedy was, in the hours between when he was shot and when he died. That’s what his brother and the other authors of the amendment envisioned. Stretching it to cover Trump ignores that history, which is—again—just the kind of thing that Trump does.
Enough already. Desperate times don’t call for radical measures; they call for conservative ones. So it’s time for everyone who loves our country—and its historic traditions—to rally behind them. If you don’t like President Trump, do what the Constitution says: Vote him out. (And if the special prosecutor finds that Trump colluded with Russia—or impeded justice—impeach him.) The best way to challenge Trump is to double down on our constitutional heritage, even as he makes a mockery of it. And the worst way is to imitate Trump, all in the guise of maligning him.
We will never know for certain who was the first person shot dead in the Americas. Visitors to Historic Jamestowne in Virginia can see the skeleton of a young man, no older than 19; the pellets of lead that killed him are embedded in a shattered knee. Known only as JR102C, the man’s identity has been the subject of much debate. The strongest theory holds that he was a dashing young military officer named George Harrison, and that he was killed in a duel.
But this European settler was hardly the first human being in the “New World” killed by a gun. Forensic scientists excavating sites in Peru have found at least one gunshot fatality nearly a century older, an Inca man shot through the back of his skull by a conquistador. He appears to have been a noncombatant, possibly executed after a 1536 uprising, his body dumped in a mass grave alongside those of women and children. Many of their remains show signs of mutilation and abuse. No one can even pretend to guess at his name or theirs.
These early cases of gun violence belong to a history of settler-colonialism and ethnic cleansing. As the writer and historian Roxanne Dunbar-Ortiz argues in her brilliant new book, Loaded: A Disarming History of the Second Amendment, America’s obsession with guns has roots in a long, bloody legacy of racist vigilantism, militarism, and white nationalism. This past, Dunbar-Ortiz persuasively argues, undergirds both the landscape of gun violence to this day and our partisan debates about guns. Her analysis, erudite and unrelenting, exposes blind spots not just among conservatives, but, crucially, among liberals as well.
These days, debates over the Second Amendment invariably turn on interpreting the connotation of “militia” in the stipulation that “A well-regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Liberals will often argue that gun ownership was always intended to be tethered to participation in institutions like the early Colonial Army or today’s National Guard. Conservatives tend to retort, in so many words, that “the people” were always meant to have guns as such, since an armed citizenry functions as a putative check on tyrannical government over-reach. When polled, a majority of Americans say they believe the Second Amendment guarantees an individual right to bear arms, regardless of participation in formal militias, whether “for hunting” or, as the Supreme Court ruled in 2008 in District of Columbia v. Heller, for lawful “self-defense.”
A distinguished scholar of Native American history, Dunbar-Ortiz dismisses these debates as a red herring. As she pointedly notes, at the time of the Second Amendment’s drafting, other lines elsewhere in America’s founding documents already provided for the existence of formal militias, and multiple early state constitutions had spelled out an individual right to bear arms besides. What the Second Amendment guarantees is instead something else: “the violent appropriation of Native land by white settlers … as an individual right.”
Our national mythology encourages Americans to see the Second Amendment as a result of the Revolutionary War—to think of it as a matter of arming Minutemen against Redcoats. But, Dunbar-Ortiz argues, it actually enshrines practices and priorities that long preceded that conflict. For centuries before 1776, the individual white settler was understood to have not just a right to bear arms, but a responsibility to do so—and not narrowly in the service of tightly regulated militias, but broadly, so as to participate in near-constant ad-hoc, self-organized violence against Native Americans. “Settler-militias and armed households were institutionalized for the destruction and control of Native peoples, communities, and nations,” Dunbar-Ortiz writes. “Extreme violence, particularly against unarmed families and communities, was an aspect inherent in European colonialism, always with genocidal possibilities, and often with genocidal results.”
The colonists’ use of guns was brutal. Drawing on the work of the military historian John Grenier, Dunbar-Ortiz describes how early colonists practiced what came to be known as “the first way of war” or “savage war.” Unlike war between “civilized” European nation-states, in this mode of warfare Anglo settlers organized “irregular units to brutally attack and destroy unarmed Indigenous women, children, and old people using unlimited violence in unrelenting attacks.” Settler “rangers” pursued ethnic cleansing with the single-minded goal of depopulating land that they could then claim for themselves per a legal ownership scheme (“fee simple” land titles) that was considerably more advantageous than anything available to them on the European continent. Along the way they instituted practices like scalp-bounties.
This program of acquiring territory through genocide predated the settlers’ aspirations of independence from the British Crown. In no small part it even fueled those aspirations, since the Colonial government at times sought to restrain or at least control it. When Americans did establish independence, the program found expression in the Second Amendment, as Dunbar-Ortiz writes:
Although the U.S. Constitution formally instituted “militias” as state-controlled bodies that were subsequently deployed to wage wars against Native Americans, the voluntary militias described in the Second Amendment entitled settlers, as individuals and families, with the right to combat Native Americans on their own.
Although some readers will doubtless contest it, this is a critical intervention in debates over the Second Amendment. Instead of seeing its tortured language about the militia as a kind of archaic oddity, as something that must either be “updated” or explained away, Dunbar-Ortiz instead grounds the Second Amendment in something much bigger. She puts it front and center in the history of violence in America. This history encompasses far more than just the era of early colonization and the Revolutionary War, and Dunbar-Ortiz does not flinch from taking on its full scope.
As Loaded proceeds, Dunbar-Ortiz traces the ways in which gun ownership has been the cornerstone of America’s growth into a “militaristic-capitalistic powerhouse.” In her account, guns are the reason that white people maintained control of the social order despite nominal changes in which parties or groups might claim power. For example, Dunbar-Ortiz notes how, in parts of the South before the Revolution, a class of armed white civilians was employed by the Colonial courts to serve as “searchers,” not just to track down fugitive slaves, but to detain freed blacks besides. Distinct from the formal militia, which was preoccupied with battling Native Americans, these “searchers,” subsequently known as “patrollers,” continued their work after the overthrow of the British, deploying a variety of tactics including the creation and printing of the first Wanted ads.
After the Civil War, these groups of armed whites morphed once more, continuing to harass and terrorize emancipated black Americans, becoming either Klansmen or police (or, not infrequently, both at the same time). For these foot soldiers of white supremacy, the titles and group affiliations might change, but their roles—and the centrality of guns to those roles—remained the same. Indeed, as Dunbar-Ortiz notes, many Confederate veterans publicly associated with each other long after the war through so-called “rifle clubs,” often barely-disguised fronts for Klan activity. Granted, at times, in such a short but information-packed book, accounts of such continuities may feel schematic; but at others, they can feel revelatory, as when Dunbar-Ortiz compares “savage war” to the rampages of contemporary mass killers. Throughout, and even when uneven, her narrative is devastating.
Loaded is also a story of many individuals. The trope of “the hunter,” for example, recurs frequently in current debates over guns, even though hunting is no longer the leading reason Americans give for gun ownership. Dunbar-Ortiz traces the common image of the gun-bearing hunter to the folk-hero image of Daniel Boone, the frontiersman whose exploits in Kentucky were the stuff of legend even during his lifetime in the eighteenth and early nineteenth centuries. Yet, as Dunbar-Ortiz observes, Boone’s celebrity was largely the work of another man, John Filson, a real estate speculator who wrote under Boone’s name. He simply wanted to encourage settlers to buy claims over land that was already heavily populated by Native Americans. So much for the image of the rugged American frontiersman, gun in hand, experiencing his primordial oneness with the wilderness, so beloved by gun rights advocates.
Likewise, Dunbar-Ortiz plumbs the legacy of figures like the pro-slavery paramilitary leader William Quantrill, who led a band of “bushwhacker” guerillas carrying out violent raids against pro-abolition communities in the Kansas and Missouri territories before and during the Civil War. In one instance, Quantrill and his men attacked Lawrence, Kansas, butchering some 160 civilians, including children. Yet as time passed, and it became expedient to forget and move beyond the violence of the Civil War, Quantrill and his men, who were famous for wielding six-shooter revolvers, became integrated into the fuzzy legend of “the West.” They ceased to be seen as “bloody, murdering Confederate guerillas” and became “righteous outlaws.”
Some of these profiles may be richer and more in-depth than others, but together they form a tapestry that is grim and compelling indeed. The right’s talk of preserving American greatness, Dunbar-Ortiz proposes, comes directly from this violent history. From Reagan’s race politics to Trump’s nativism, leaders on the right have articulated the principles that groups of armed American extremists practice. “White nationalists are the irregular forces—the voluntary militias—of the actually existing political-economic order,” she states, succinctly. “They are provided for in the Second Amendment.”
Among the stories Dunbar-Ortiz tells is, fascinatingly, her own. As a Leftist activist in the 1970s, Dunbar-Ortiz participated in a “women’s study-action group” in Louisiana which was infiltrated by a government-affiliated spy, surveilled by police, and threatened by a member of the KKK. Desperate and “caught up in a current of repression and paranoia,” Dunbar-Ortiz and her comrades began to arm themselves, training with guns and eventually amassing a small arsenal. “We had fallen under the spell of guns,” she writes. “Our relationship to them had become a kind of passion that was inappropriate to our political objectives, and it ended up distorting and determining them.”
Dunbar-Ortiz eventually moved on from her phase of “gun love,” but the country has, of course, done just the opposite. Since the early ’70s, the number of privately owned guns in American hands has nearly tripled, to well over 300 million. Meanwhile, American military forces are now deployed in some 180 countries, and our arms industry has achieved export levels and profit margins unmatched since the end of World War II. Towards the end of Loaded, Dunbar-Ortiz presents American “gun love” as a quasi-religious phenomenon, bound up in a primal national myth of chosen-ness, victimization, and righteous violence.
It would be folly to hope that any single intellectual intervention, no matter how trenchant, could undo this template, or could reverse or slow this trajectory. And yet if we are to even imagine this possibility, we must have some sort of vocabulary to do so. As a portrait of the deepest structures of American violence, Loaded is an indispensable book.
Correction: An earlier version of this piece misstated where JR102C is on display and when Daniel Boone was alive. JR102C is at Historic Jamestowne and Daniel Boone lived in the eighteenth and nineteenth centuries.
Here is a very short history of Donald Trump’s long-standing war on the press: He has called reporters “dishonest,” “crooked,” “lying,” “scum,” “disgusting,” “crazy,” and “bad.” He has blacklisted news organizations and thrown reporters out of political rallies. “Good job,” he told Corey Lewandowski, his campaign manager, after Lewandowski forcibly ejected reporter Michelle Fields from an event, leaving her bruised. Trump has vowed to revise libel laws to make it easier to sue media companies—“Believe me, if I become president, oh, do they have problems,” Trump told his supporters—and he has suggested jailing journalists who print classified information. In July, he tweeted a video of himself body-slamming a man with a CNN logo superimposed on his face. The media, Trump has said, “is the enemy of the American people.”
These attacks are vile, and they have begun to undermine one of this country’s key democratic institutions. Recent polling suggests that nearly half of all Americans believe the media invents stories about Trump. Even more disturbingly, Trump’s anti-media crusade is making the job of reporting an increasingly dangerous one. Journalists are facing years in prison for covering the protests during Trump’s inauguration. At his rallies, reporters have been attacked with anti-Semitic slurs, alongside chants of “CNN sucks!” And in May, Greg Gianforte, then a Republican congressional candidate in Montana, attacked Guardian reporter Ben Jacobs, who was simply asking him a question about health care. Such developments led the U.N. high commissioner for human rights to warn in August that freedom of the press is “under attack from the president.” Trump’s statements, he said, could be interpreted as an incitement to violence against journalists.
Particularly troubling is the fact that Trump’s threats against leakers have already led the Justice Department to open three times as many leak investigations as were underway during Barack Obama’s final three years in office. Yet it was actually the Obama administration that began the aggressive war on whistleblowers. It used the Espionage Act to prosecute government employees who spoke to the media, it monitored reporters’ phone records, and it attempted to force journalists to reveal confidential sources. “The administration’s war on leaks and other efforts to control information are the most aggressive I’ve seen since the Nixon administration,” wrote Leonard Downie Jr., one of the editors involved in The Washington Post’s Watergate investigation, in a 2013 report on Obama’s media crackdown. Trump is simply continuing down a path Obama laid out for him.
In other respects, however, Trump’s war on the press differs from Obama’s in much the same way their presidencies differ: in their comparative abilities to get things done. In October, for example, Trump suggested that the government revoke NBC’s license to broadcast news, in retaliation for what he considered “fake news.” But Trump’s call to revoke NBC’s license is an empty threat. The Federal Communications Commission doesn’t base its licensing decisions on the content that is broadcast; it simply examines whether stations have complied with its rules. Networks such as NBC, moreover, do not even receive licenses from the FCC; individual affiliates do. In order for Trump to boot NBC off the air, he’d have to take on dozens of affiliates around the country—a purge that even Trump’s own FCC chair, Ajit Pai, has insisted he would not go along with.
Similarly, Trump’s promise to make it easier to sue news organizations by loosening U.S. libel laws is a complete nonstarter. The president has no power to change libel laws; that is a matter for the courts and state legislatures, and the issue is governed by more than 50 years of Supreme Court precedent. And while Trump has complained loudly and repeatedly about how the press has treated him unfairly, he has never followed through with a libel suit since becoming a politician.
Such threats, in the end, are best understood as material to rile up Trump’s base and take the focus away from more pressing issues. It’s a technique Trump has deployed time and again throughout his presidency, to great success. Whenever the press has begun to focus on serious issues, Trump has been able to change the subject by lobbing attacks on the media—which rushes to defend itself from every slight. Trump threatened NBC, for example, on a day when he was facing intense criticism over his lackluster response to the humanitarian crisis in Puerto Rico, and just ahead of his announcement that he would decertify the nuclear deal with Iran and cut off the cost-sharing payments that undergird the Affordable Care Act. “One of Trump’s typical moves is to toss a bomb out of nowhere to deflect what is really bothering him, in the hopes that the press will be distracted,” says Doug Sosnik, who served as Bill Clinton’s political director. Trump has failed to enact any significant piece of his legislative agenda, but his strategy of attacking the press has been remarkably effective.
Trump is a master of manipulating the media. During his campaign, he generated an estimated $5 billion worth of free media coverage, simply by saying outrageous things. His presidency, too, has been a boon to the 24/7 news cycle; there are leaks and scoops by the hour, a seemingly endless torrent of breaking news. But in rushing to report on every new outburst and scandal, the press should be careful to avoid playing into Trump’s hand. His attacks on the press are serious. But the media can’t allow itself to be distracted by meaningless threats.
The Vanishing Princess is Jenny Diski’s only collection of short stories, recently published for the first time in the U.S. (it came out in the U.K. in 1995). In her introduction to the book, Heidi Julavits calls it “an artist’s sketchbook” for Diski, who, before her death in 2016, was better known for her ten novels, eight works of nonfiction, and reams of criticism for The London Review of Books. She seemed to always have been there, a ubiquitous voice that, as this collection shows, turns out to have been irreplaceable.
The Vanishing Princess finds Diski in reflective mode. The first story is about the eponymous princess, but its subtitle—“Or, The Origin of Cubism”—suggests something more than a modern fairytale. Once upon a time, a princess lived in a tower all alone. A soldier arrived and brought her food, which he liked to watch her eat. Then a second soldier came, and gave her other things she hungered for: a mirror and a calendar. Her appearance distressed her at first. The second soldier stood next to her, and “she watched his reflection standing next to hers and telling her, ‘That is you.’”
The first soldier came back. Seeing that another man had brought a mirror, he “took off a diamond ring he wore and … etched the outline of the princess’s reflection on to the glass.” When the second soldier returned, he made the princess stand before the mirror so that her reflection precisely filled out the first soldier’s outline. Then he, too, took off a diamond ring, and with its edge “drew around the reflection of her eyes,” then “filled in the lids, the pupils and the irises.” Now, within the outline, “a pair of eyes stared out.” They were “fixed in an expression of longing and alarm so poignant that the princess gasped. She could no longer see her own eyes when she looked in the mirror.”
The first soldier became more interested in looking at the eyes in the glass than watching the princess eat. “Now, on each visit, the soldiers added to the portrait in the mirror.” The princess’s original reflection “became no more than a frame, as each man added a feature according to his mood. An elbow was matched with the bridge of a nose; a wrist with a knee; a buttock curved beside an anklebone; one ear rested on a fingernail.” Finally, she couldn’t see herself at all. She disappeared.
In presenting a fairytale about three-dimensional representation, Diski focuses not on the person doing the looking, but on the person being looked at. It is a story about being, not about seeing. Like almost every story in The Vanishing Princess—which, as Julavits puts it, are all bound together by “their feministly interrogative nature”—the tale takes up one aspect of feminine experience and inflates it into a beautiful and monstrous parable.
Let’s take Cubism as a cipher for the creative projects of men. Those projects’ origins, Diski says, belong less with real women than with their reflections, which are then elaborated into representational practices through games of careless tennis between communities of men. The original subjects of those works end up obscured even from themselves. As representation of the world takes on multiple dimensions in men’s hands, the level at which women can see themselves truthfully reflected is lost.
In retelling folk stories with new female perspectives, The Vanishing Princess’s closest ally is Angela Carter. Like Carter, Diski presents women who are living inside roles. She gives us the internal monologue of the gazed-upon woman. Some retellings are simply funny and sexy, like “Shit and Gold,” which reimagines Rumpelstiltskin. Rather than having to discover the imp’s name, the miller’s daughter becomes the queen by making him forget it. And she enjoys doing it.
In “Bath Time,” a woman gives up everything in her life for the single perfect experience: a bath. She invests every penny she has in a pristine white tiled room containing practically nothing but a white towel, a black bottle of bath oil, a sink. We never get to experience the bath itself, but at the story’s end we share with her the delicious bedtime of a woman who knows that the next day will be the greatest of her life. The principal idea here is of fulfilling desire, to the detriment of everything else in the universe, because it is all that matters.
Desire is found in other stories. “Housewife” is a deliciously lascivious story about the sexual connection between a housewife and her lover. Their correspondence is ridiculously florid, and Diski does a beautiful juggling act of celebrating a woman coming into new sexual consciousness while showing how silly and sweet human beings are when they’re doing that kind of thing. Her focus remains on the experience of the reflected self: “Filled with desire at her image and lusting dreamily for herself,” Diski writes of her heroine, “she watched her reflection slide her fingertips lightly down her body, from shoulder to breast, pausing to cup it softly and squeezing the nipple between thumb and finger.” Her “saturated labia” are “as silky and smooth as the satin covering her breasts, as slippery and wet as the liver she had donated to the cats.”
If many stories in The Vanishing Princess are about tapping into the hidden desire at the heart of the self, others are about its destruction by an outrageous world. The very minimal concept story “Short Circuit” describes a woman driven mad by her conviction that her partner is not telling her the truth about not telling her the truth. “Wide Blue Yonder” gives us a dissatisfied woman who makes the sudden and exciting leap into the “perfect pleasure” of drifting out to sea, away from her bad husband, towards her certain death.
For all the affirmation of female interiority in these stories, therefore, Diski also gives us a low hum of dread and perturbation. There’s a distant wryness to the narrative voice of The Vanishing Princess, one which uses fairytale cutouts to make everyday life seem both ridiculous and frightening. Madness and evil are as omnipresent as they are in fairytales, and they hide just out of our sight—only one thwarted desire, one misunderstanding away—in our own heads.
Everything’s coming up Dollar General. “The economy is continuing to create more of our core customer,” its CEO, Todd Vasos, recently informed The Wall Street Journal. “We are putting stores today [in areas] that perhaps five years ago were just on the cusp of probably not being our demographic and it has now turned to being our demographic.” Joyful news for Vasos, who earned $8.5 million in 2016, and for his company, which is enjoying rising profits. His cashiers, who make minimum wage, have less to celebrate. Dollar General says it’s courting customers who make under $40,000 a year. Its pay scale means it’s producing those customers, too.
This is not to make Vasos a scapegoat. In fact, he is more honest than most about what is happening in America’s free market economy. He is certainly more honest than the Republicans in power, who insist, with barely feigned conviction, that wealth will eventually trickle down. He understands that the moment we live in is advantageous to him, and that what is good for Dollar General is not necessarily good for workers. And he knows that the government wants to keep it that way.
The term “class war” is out of vogue. The Democratic Party doesn’t use it, wary of its Marxist heritage. Mainstream outlets avoid it, at least as a serious way to describe how elites are perpetuating their command of this country’s resources. When it is used, it is mostly to denigrate class agitation in the opposite direction: those on the bottom end who dare to criticize the wealthy. But the Donald Trump era has been clarifying in so many respects, not least in showing that the Republican Party, in league with the upper classes, is engaged in an all-out class war against the working and middle classes. In every area of policy—tax, environment, health, energy, even the management of national parks—we have seen a sustained disdain for common people and an allegiance to the rich. It is class war, and they’re winning.
The paeans to trickle-down economics and limited government are belied by the hard facts. Severe poverty increased in 2016; the Pew Research Center reports that three in ten families in poverty made at least $15,000 below the poverty threshold. Rich families, however, are getting richer. “In 1963, families near the top had six times the wealth (or, $6 for every $1) of families in the middle. By 2016, they had 12 times the wealth of families in the middle,” the Urban Institute revealed this year. Social mobility, meanwhile, is decreasing. Data compiled by The Financial Times shows that in 2016, 30- and 40-year-olds earned significantly less than their parents did at the same ages.
The Republican tax bill, the latest and most successful expression of a long campaign, would exacerbate these trends. Even objective outlets like The Washington Post find that the Senate version of the bill—which slashes corporate taxes, allows significant exemptions to the estate tax, repeals the Affordable Care Act’s individual mandate, and weakens public employee unions—“is weighted to wealthier Americans.” As one conservative economist told Bloomberg, “It’s death to Democrats.” But it’s much more than that; it’s also an attack on the GOP’s own working-class voters, who Republicans hope will stick with the party despite higher taxes and stagnant wages and more expensive insurance because of the country’s other great battle, the culture war.
In terms of actual policy, however, Republicans are working for their obscenely wealthy donor class. They have drafted a bill that shifts wealth from one class to another and concentrates it, where it becomes a lasting testimony to the superiority of the few in contrast to the inferiority of the many. It becomes a value judgment, a determination of worth. And that value judgment can be seen in all areas of this Republican-controlled government.
The country’s social safety net is the party’s next target in 2018. “We’re going to have to get back next year at entitlement reform, which is how you tackle the debt and the deficit,” Speaker Paul Ryan recently told a local radio show. “Frankly, it’s the health care entitlements that are the big drivers of our debt, so we spend more time on the health care entitlements—because that’s really where the problem lies, fiscally speaking.” Entitlement reform means cuts to Medicare and Medicaid, which will help pay for the $1 trillion hole in the deficit caused by the tax bill.
Before the tax bill, there was Obamacare repeal. Though repeal was long a staple of conservative campaign rhetoric, Republicans proved unable to marshal themselves well enough to carry out their years of threats. But their attempts are instructive—and not just because the policies they proposed are likely to pop up, gremlin-like, in bill after bill until Republicans are swept from office. After a close and critical examination of their proposals to break down our already-mediocre health care system, there can’t really be any doubt that Republican leadership is indifferent to the welfare of its poorest backers.
Consider the Graham-Cassidy bill, the party’s last sad grasp at victory: It would’ve cost 21 million people their health insurance, according to one study by the Brookings Institution, a figure well in line with other repeal attempts. The American Health Care Act would have uninsured 23 million; the Better Care Reconciliation Act, 22 million. Republicans wanted to starve Medicaid and kill the individual mandate, and when these policies proved to be dramatically unpopular they resorted to lying. It’s the nonpartisan Congressional Budget Office that’s wrong, Republicans say. As people with disabilities protested in the halls of Congress and were dragged out of their wheelchairs by Capitol police, wealthy donors were salivating over the tax breaks in the bills.
Health care law is a useful magnifying glass. Viewed through the lens of Graham-Cassidy, we can see how conservative policies entrench the existence of a vulnerable class. The Republican vision for health care reform implies sharp distinctions between lives with value and those without, and those distinctions map directly onto socioeconomic divides. In a debate over the future of the Children’s Health Insurance Program, Republican Senator Orrin Hatch insisted he wanted to save the program, which insures nearly 10 million children whose parents do not qualify for Medicaid. But he issued a telling caveat too: “I believe in helping those who cannot help themselves but would if they could. I have a rough time wanting to spend billions and billions and trillions of dollars to help people who won’t help themselves, won’t lift a finger and expect the federal government to do everything.” This is classic GOP rhetoric, splitting those who “deserve” help from those who allegedly do not. The less the poor take, the more we have to spare for our John Galts.
It’s more difficult to frame the Trump administration’s deregulation campaign in such stark terms. Deregulation, however, serves the same goal that conservative health policy and tax policy do: It empowers corporations and disempowers workers. Nearly as soon as Trump took office he signed an executive order requiring agencies to kill two regulations for every one regulation they implement. Trump is convinced that regulations are inherently bad, and that his order will result in some magically perfected regulatory regime. In practice, however, his deregulatory policy has come at the expense of average people. By May, Trump had used the Congressional Review Act 14 times to end regulations protecting clean water, funding for Planned Parenthood, and broadband access. More recently, his Department of Labor threatened to end a ban on tip-pooling in the restaurant industry.
Trump pushes deregulation for the same reasons he stocked his cabinet with some of the wealthiest people in the country. It’s hardly an accident that most of the individuals appointed to steer this deregulation push have deep ties to the industries they’re meant to oversee. A New York Times/ProPublica investigation uncovered 71 administration appointees with corporate ties, 28 of whom had potential conflicts of interest. They have every incentive to work on behalf of their former employers or themselves, and little reason to defend the interests of the public.
Trump’s menagerie seems determined to prove claims that he has turned the U.S. government into a kleptocracy. Consider Interior Secretary Ryan Zinke, who took government helicopters to private events and steered a $300 million contract to help rebuild Puerto Rico to a small firm in his hometown of Whitefish, Montana; or former Health and Human Services Secretary Tom Price, who resigned in September after reports that he billed the government for private flights. And let us not forget Treasury Secretary Steve Mnuchin, who took a government flight to Fort Knox so that he and his wife could watch the solar eclipse atop a dragon-hoard of gold. “It’s very hard not to give tax cuts to the wealthy,” he said in October. In every instance, we see an extraordinary degree of entitlement, alongside a nonchalance for how taxpayer dollars are used.
The administration is in the midst of undermining both the Consumer Financial Protection Bureau, the sole government agency tasked with protecting consumers from predatory financial practices, and the Environmental Protection Agency, which has struggled to protect Americans from the effects of climate change and industrial pollution thanks to the administration’s deep ties to the oil and gas sector. The CFPB’s new director, Mick Mulvaney, has called the agency a “sad, sick” joke, and has taken contributions from payday lenders. EPA Administrator Scott Pruitt is basically a glorified agent of the fossil fuel industry.
The administration’s deference to oil and gas companies, at the expense of the health and wellbeing of voters, can even be seen in its drive to suck up public lands. The idea of the commons means nothing to it. If there is land that can be drilled to make rich people richer, then the protections for the Arctic National Wildlife Refuge will have to go. So long, Bears Ears.
“The modern American capitalist system is far from perfect. But for all its flaws, our system—and the digital communication channels it enabled—has delivered social justice more swiftly and effectively than supposedly more enlightened public bodies tend to,” Reason editor Elizabeth Nolan Brown argued recently in The New York Times. Brown’s premise—that free markets helped activists get celebrity sexual harassers fired—doesn’t stand up in an era marked by grotesque inequality. If anything, capitalism is exactly why these sexual harassers operated with impunity for so long.
The ongoing vulnerability of women and people of color is the deadliest proof that class war exists. Harvey Weinstein survived by placing financial pressure on his victims and offering incentives to everyone else to keep quiet; the markets that made him a millionaire enabled him to hire literal spies and sic them on Rose McGowan. Domestic workers, who are mostly women of color, recently told Splinter News that almost non-existent labor protections make it difficult, even impossible, for them to report sexual harassment or abuse. Other inequalities also demand an accounting: The Economic Policy Institute reported in 2017 that the median wealth of white families is 12 times higher than the median wealth of black families; when we talk about the wage gap, we’re talking about a phenomenon that predominantly holds back Latinas and black women.
Even the consequences of deregulation disproportionately affect people of color. The Trump administration’s hostility to climate science, and its friendliness to industry, will make it even more difficult for marginalized communities to survive. “We often forget that the choices we make on regulations affecting clean air, clean water, and enforcement are interconnected with the lives of our vulnerable communities and tribal populations,” wrote Mustafa Ali when he resigned from the Environmental Protection Agency in March.
Trump’s war on women and people of color stems from an irrefutable record of racism and misogyny. But it is facilitated by a specific economic platform. Trumpism cannot be destroyed without reckoning with its economic dimension. The risk it poses is best understood through the same terms that propelled the American labor movement to its early victories: The problem is class war, and the solution is a clear, vibrant left-wing platform. Instead, some Democrats ally with payday lenders and wring their hands over the deficit; others still denigrate single-payer health care as a futile dream. Would-be social reformers look to Silicon Valley and corporate donors for solutions, as if the people who are perpetuating our inequalities have it in their best interest to solve them. But egalitarianism is the great counter to Trump’s false populism.
“Don’t kill us, kill the bill,” chanted activists with disabilities in November. They protested not Obamacare repeal but the tax bill, because the consequences of both policies will be the same. There will be other bills, other policies; there will be more deaths, just to satisfy the wealthy. All wars have body counts, and class war is the same. We can at least admit we’re fighting one.
The key to survival in Donald Trump’s orbit is knowing that you’re really only ever performing for an audience of one. Lose his approval and trust and you’re out. When Kellyanne Conway says that “a lot of people in the mainstream media interfered with our election by trying to help Hillary Clinton win” or Stephen Miller shouts that Trump’s national security actions “will not be questioned,” they’re fulfilling their one true duty, which is as much speaking to the boss as for the boss.
These performances are usually done on television, but television has the advantage of being built for short attention spans. Let Trump Be Trump, Corey Lewandowski and David Bossie’s new campaign memoir, gives the audience-of-one schtick a book-length treatment. This repetitive, sycophantic, and self-serving book, oddly written in the third person, is meant to ingratiate its authors with Trump—or “the boss,” as he’s referred to again and again—a man whose intellect, leadership, and stamina they praise for a tiresome 264 pages. The unintentional effect is a portrait of a temperamental, thin-skinned, and profoundly needy man at the helm.
Let Trump Be Trump is the first book to have been published by Trump campaign insiders, and early coverage has fixated on its gossipy tidbits, of which there are many. These stories have mostly dwelled on Trump’s stomach-churning diet—Filet-O-Fish, Big Macs, Vienna Fingers, Oreos, gallons of Diet Coke—and Hope Hicks steaming Trump’s suit pants while he was wearing them.
There are a few items that could, I suppose, be described as newsworthy. Lewandowski describes the Muslim ban as being a cynical political calculation: “For us, the decision was simple,” he writes. “We wanted none of the other candidates to move to the right of us on immigration.” Lewandowski and Bossie both despise former campaign chair Paul Manafort and revel in the details of his firing, despite the fact that neither was technically part of the campaign in August of 2016. (Lewandowski was fired in June, although he remained on the payroll and in contact with the campaign; Bossie joined as deputy campaign manager in September.)
In their telling, Trump was horrified by Manafort’s shady dealings and ties to Russian oligarchs. When Trump became aware in August of a $12.7 million payment from a Ukrainian political party, he was aghast: “I’ve got a crook running my campaign,” he reportedly said, before ordering Steve Bannon to fire Manafort. Lewandowski and Bossie also allege that Manafort asked for a “$5 million check to be cut for a media buy that sounded vague at best.” (The campaign’s COO refused.)
Manafort alone gets two chapters in the book (including one titled “Thurston Howell III”), and it becomes evident that Lewandowski blames Manafort for his own firing. This personal animosity makes the Manafort sections in Let Trump Be Trump unique: They have a ring of truth. Manafort is portrayed as an amoral, manipulative, elitist jerk whose very existence runs in opposition to the grassroots, populist campaign that Lewandowski helped build.
But this is ultimately a happy coincidence. A reader would come away from this book with the impression that the campaign’s shady dealings were limited to Manafort. Michael Flynn is barely mentioned; Jared Kushner only has a bit part, as does Donald Trump Jr. Let Trump Be Trump largely sweeps Russian interference in the 2016 election under the rug. (However, the Manafort sections confirm the veracity of one aspect of the Steele dossier, which reported that Lewandowski, “who hated MANAFORT personally and remained close to TRUMP,” was foremost among the senior Trump officials pushing the campaign chair out.)
The majority of Let Trump Be Trump is devoted to two contradictory arguments. The first is that Lewandowski is the real architect of the Trump campaign and its success. The second is that Trump is a genius and the greatest politician in history. Let Trump Be Trump is the attempt to suture these ideas together, which culminates in the epiphany that the best way to win is to let Trump do whatever he feels like.
The problem is that pure, uncut Trump is intolerable, even by Lewandowski and Bossie’s lights. In one of the few honest moments in the book, they write, “Sooner or later, everybody who works for Donald Trump will see a side of him that makes you wonder why you took a job with him in the first place. His wrath is never intended as any personal offense, but sometimes it can be hard not to take it that way. The mode that he switches into when things aren’t going his way can feel like an all-out assault; it’d break most hardened men and women into little pieces.”
Trump’s temper tantrums are meant to exemplify his leadership qualities—that he demands greatness from others and goes ballistic when his high expectations aren’t met. In Let Trump Be Trump, Manafort’s underhandedness results in one such outburst, but so does Sam Nunberg making a convoluted order at Wendy’s. Lewandowski blames his own firing, in part, on sending Ivanka and Jared Kushner to an Iowa caucus site that lacked Trump volunteers or literature.
The sheer effusiveness of praise for Trump in Let Trump Be Trump is embarrassing. It also suggests a person who is desperate for constant, Stalin-esque praise from advisers who should be confronting him with difficult truths. Open up Let Trump Be Trump at random and you’ll find sentences like this: “Donald Trump’s mind works differently than most. His thoughts sometimes come out like pieces of a puzzle. It’s only later when you put the pieces together that you realize how much they’re worth. Sometimes the puzzle pieces form a masterpiece.”
Watching Trump vociferously defend his claim that John McCain was “not a war hero” because he was captured by the Vietnamese, Lewandowski realizes: “He had never seen a candidate have such courage in his convictions. No matter how hard the press pushed him, Trump wasn’t backing down from his words.”
Lewandowski and Bossie even suggest that Trump is a student of Jungian psychology:
Although the mainstream media and other haters give him little credit for his intellect, Donald Trump has more than a fundamental grasp on a surprising number of fields, including Jungian psychology. One of his favorite books is Memory, Dreams, Reflections, Jung’s autobiography. Steve Bannon insists that Trump came up with the idea of the names Lyin’ Ted, Little Marco, Low-Energy Jeb, and, later, Crooked Hillary, from his knowledge of Jungian archetypes.
If Lewandowski and Bossie had ever attended a creative writing class, they’d know the value of showing, not telling. Let Trump Be Trump, however, is all telling. Trump is a genius, we read again and again. He’s a “blue-collar billionaire” motivated by a desire to help cab drivers. He’s generous and big-hearted, which is why duplicity and back-stabbing upset him so much. Unfortunately, there’s very little in Let Trump Be Trump that backs up any of these claims—they’re just repeated until they become a kind of dull, obsequious wallpaper.
But providing genuine insight into the 2016 campaign isn’t what Let Trump Be Trump is about. Rather, it’s Lewandowski and Bossie’s attempt to curry favor with their former boss at a moment when both are on the outside. Let Trump Be Trump is a reminder that they’re still loyal—and still ready to serve—a 264-page “I want you back” letter. There’s nothing really in here of interest to Trump’s detractors or his supporters—there’s nothing here for readers, in general. It’s a book for an audience of one. Unfortunately for Lewandowski and Bossie, their intended audience isn’t much of a reader.
An unusual thing happened to me this week. A story that I wrote for The New Republic about the heated dispute over a proposed Costco poultry plant in Fremont, Nebraska, was published on the same day as another story on the same subject, written by Henry Grabar for Slate. It happens. Journalists know that certain microcosms reveal larger narratives; often we pick up the same frequencies. When I started to read the story, though, my concerns had less to do with duplication than a growing unease: Grabar had misjudged the essential story.
Grabar is obviously a serious reporter. He went to Fremont, but more than that: He visited local business owners (white and Hispanic), went to a city council meeting and a meeting of the local Tea Party group, talked to the local Democratic Party chair and the head of the United Food and Commercial Workers, interviewed the head of the subcontractor whose company would run the plant and a sociologist who researches meatpacking towns, visited the farm of a local environmentalist, and called a biologist who studies pollution in the Elkhorn River. He even visited the local Catholic Church. In short, Grabar wore down some serious shoe leather in Fremont. And what he got is interesting and factual and sensitive—and mostly misses the point.
Fremont, Grabar explains, like so many small towns in America, is haunted by the memory of a more prosperous past and is trying to find a way forward. Costco’s investment “could save the town,” he writes, if it welcomes “a plant that is all but certain to bring hundreds more immigrant and refugee families to town.” And yet resistance to Costco persists. Grabar allows Mayor Scott Getzschman to distill the problem: “There’s some people that, regardless of what you do, it’s change, and they don’t want change, period.”
So the situation in Fremont, as Grabar sees it, is that Costco represents progress, in the form of cultural diversity and economic growth, while the townspeople resist out of a mix of racism and dislike of change. That’s certainly how the city government, local economic development board, and the corporate executives at Costco would like you to see it. Grabar largely accepts this view, possibly because it promises a Fremont that looks a little more like the world he knows and understands. “Henry currently lives in his native New York City,” his website bio reads. Not that New Yorkers can’t write about Nebraska, but here, that particular worldview colors the story.
At one point, he theorizes that “rural assimilation can be easier” for immigrants from Latin America, because towns like Fremont “are more similar” to the homes that they have left behind “than frenetic urban neighborhoods like Boyle Heights, Los Angeles, and Elmhurst, Queens.” In lumping Latino immigrants into one group and assuming they have one background, Grabar winds up essentializing and reducing the very diversity that he means to celebrate. It’s true that the first waves of immigrants to Fremont came largely from Chichihualco, a tiny mountain village in Guerrero, Mexico, and from the central highlands of Guatemala, where the Maya speak K’iche’. But neither especially resembles Fremont, Nebraska. More than that, the groups who would likely work at the Costco plant come from even more disparate immigrant groups: Karen people from Burma, many of whom come from camps along the Thai border, and Somalis from war-torn Mogadishu by way of the Dadaab Refugee Complex in southern Kenya. Will their assimilation also be easier in Fremont, a town that rejected its population of undocumented migrants, than it would be in Queens?
People in Fremont live within an hour’s drive of Nebraska’s two largest cities and virtually all of its major cultural institutions. Plenty of people head to Lincoln for Husker football games or to Omaha for the College World Series. They go to concerts at Century Link Center and Pinnacle Arena. They shop at the Westroads Mall in Omaha’s western suburbs and the Nebraska Crossing Outlets in Gretna. And all the while, they are watching Fox News at home, listening to Fox News radio affiliates in the car, and scrolling through Facebook on their phones.
In short, they’re much like people everywhere else in this country—except that many of them have chosen to live in Fremont not only for its proximity to modern comforts but also for its remove from modern diversity. Which is to say: Fremont should not be treated like some uncontacted tribe in the Amazon but rather like an apocalyptic cult whose members are threatening suicide rather than live in a world they now denounce. Expecting to see the positive effects of more diversity in a town that has already spent a decade affirming and reaffirming its rejection of immigrants by legal means, even at stiff financial cost, is to completely misjudge how deeply ingrained this kind of racism is and, worse still, to misunderstand where it comes from.
Grabar says that immigrants have “changed the town’s identity.” I would argue that the latest wave of immigrants has merely revealed, yet again, Fremont’s complex response to outsiders of all stripes. The city shut down German-language newspapers and banned speaking German in public during World War I. Midland College, founded by German Lutherans in Kansas, was also welcomed to Fremont in 1919. Fremont was one of the centers of KKK activity in the 1920s; the town’s state senator also introduced anti-Klan legislation. The Hormel union provided economic stability for Fremont’s workers; one of its founders also explained that the union was formed when Hormel “hired 40 niggers,” so “we got clubs ... and we run them out.” And, of course, Fremont has seen its ethnic diversity boom in the last 25 years, and the town’s people responded by passing an anti-immigrant ordinance.
Accepting Fremont’s whiteness as a state of nature rather than a constructed and enforced reality is to accept the racist myth of rural white homogeneity. In fact, the demographic makeup of Fremont, as the next stop up the Platte River from Omaha, is and has always been much more in flux than that of most rural places. That’s why there are conflicts. To make some generic call for “change” plays into the false narrative that such places haven’t always been changing and stokes the Middle American paranoia that the coasts are judging and trying to “correct” them—when, in fact, the most recent throes of racist fervor in Fremont are anything but homegrown.
Grabar, in discussing Fremont’s ordinance, never mentions that Bob Warner, the city councilman who first proposed the city’s anti-immigration ordinance, got the idea after seeing a story about an immigration raid in Postville, Iowa, on Fox News. Grabar never mentions that the ordinance was authored by Kansas Secretary of State Kris Kobach, who became a key anti-immigration advisor to Donald Trump’s campaign and is now the head of Trump’s commission bent on national voter suppression. Grabar doesn’t mention that Fremonters who oppose the plant over worries about bringing in Somali workers almost universally talk about what they have seen on Fox News or Breitbart or Facebook.
Proposing that that kind of fomented rage can be overcome by simply bringing in thousands of Somali neighbors is not only naïve but irresponsible. When Tyson brought hundreds of Somali workers into Emporia, Kansas, in 2006, the town was seized by anti-Muslim bigotry. The wild rumors and threats of violence grew so fevered that Tyson finally closed the plant and relocated workers to Garden City, Kansas, in 2007. Last year, as I noted in an earlier story for The New Republic, the FBI alleged that a group of white supremacists from nearby Liberal, Kansas, fueled by growing anti-Muslim and anti-immigrant rhetoric, plotted to blow up the apartment complex where many of those same Somali workers live with a massive Timothy McVeigh-style truck bomb.
The problem, in other words, isn’t that Fremont is disconnected from the world and is just in need of some diversity; the problem is that it is all too connected—but to false narratives. To champion meatpacking companies as agents of cultural diversification is both to give them undue credit and to put the workforce that they already exploit at further risk.
One last thing. On Wednesday, on Twitter, I signaled my intention to raise these kinds of issues and to encourage Henry Grabar to enter into conversation with me about how all journalists can do better. That goes for me, too. I welcome Grabar’s, and anyone’s, critiques of my work. (This includes Slate’s own Jordan Weissman, who tweeted at me, “calling bullshit,” on my criticism of Grabar’s article. “Darting in, pulling rank as a local . . . is crap,” he wrote, adding that I was “pompous” and “deeply lame.”) As much as I believe that Grabar is blinkered by a remote worldview, my own understanding of Fremont is perhaps colored by too much familiarity.
After covering Fremont’s continuing immigration disputes for more than seven years now, I often feel a kind of despair at telling the same story again and again. Fremont native Blake Harper, whom I interviewed for my book The Chain (about one-third of which is set in Fremont), tweeted to me: “I love your work, @TedGenoways. But I’m burnt out. I don’t think I can handle yet another story about rampant stupidity in Fremont.” I know how he feels, and that, too, is a kind of bias.
But when Grabar writes that “deregulation and factory farming” are “bringing good news” to towns like Storm Lake, Iowa, and Garden City, Kansas, it makes people like me wonder. Does he not know that the editor of the Storm Lake Times won a Pulitzer Prize this year for his coverage of issues surrounding the devastating effects of nitrate pollution from cropland on the town’s drinking water?* Does he not know that Garden City was nearly the site of a horrifying anti-Muslim terrorist attack? When he writes that this “good news” is especially visible in Schuyler, Nebraska, does he not know that the high school’s nearly all-Hispanic basketball team spent its entire season last year facing opposing crowds who waved Trump campaign signs and shouted racist slurs and chanted “Build that wall!,” all through their games?
When Donald Trump was elected president, the national news media committed to more coverage of the “heartland” areas that put him in office. I’m pleased to see Slate undertaking long-form narrative pieces about a place like Fremont. But if you really want to understand what’s happening in the so-called Real America, you have to be able to put aside coastal biases and take a long, hard look.
Correction: *This article originally stated that the Storm Lake Times won a Pulitzer prize for its coverage of water pollution caused by hog barns.
Once upon a time, a certain kind of fairy tale goes, the most interesting and articulate minds in the world converged in dank Manhattan apartments to debate the merits of literary fiction and Trotskyist politics over cocktails. This is the story of the New York Intellectuals, of the minds and temperaments behind Partisan Review and eventually The New York Review of Books. There isn’t any real magic in this bit of cultural history, but there is fairy dust to be found in it all the same. For a certain kind of person in America, the New York Intellectuals had so much: vigorous debate, devotion to literature and ideas, and suitably shabby homes in the tasteful precincts of Manhattan and coastal Maine.
Elizabeth Hardwick lived in one such apartment, on West 67th Street, as few reminiscences of her fail to mention. She lived the exemplary midcentury literary life, publishing a few well-regarded novels and a substantial pile of extravagantly praised essays, later becoming a founder and editor of The New York Review of Books. Her essays, now reissued by NYRB Classics, are specimens of impeccable taste. Her prose is ornamented but not ornamental. Adjectives come to her in artful trios. Sentences hum with an energy of their own, even trill a little, but only within the bounds she prescribed. And even though she often addressed unruly subjects—Zelda Fitzgerald’s “sad, wasted life” or the “violent self-definition” of Sylvia Plath—somehow her approach in prose was always proper, always carefully calibrated, even genteel.
It was that sense of decorum, I think, that John Leonard had in mind when he praised the “brilliant domesticity” of her style, a phrase one could characterize as sexist. (It is hard to imagine even the softest-spoken of male writers being complimented for their “domesticity.”) Her admirers often remarked on the elegance of her put-downs, what Hilton Als once called her “frankly intimate tone.” On the page, the person you meet is someone whose intimidating erudition presents itself as at once effortless and admonishing. Every major American work of literature is right there at her fingertips: all of Melville, all of Wharton, all of James. And she seems to know everything not just about great writers’ work but about their lives, too. Did you know, for instance, that Ralph Waldo Emerson and Margaret Fuller were the best of friends? No? Hardwick makes you feel you should.
And yet it must have been very hard to actually be Elizabeth Hardwick. Her marriage to Robert Lowell in 1949 brought her both transcendent passion and abject disaster. She spent many years playing his nursemaid, as he was repeatedly committed to mental institutions, and bearing his infidelities as a function of his madness. Perhaps worse, she was in her professional life that double-edged thing, a writer’s writer. She lived in a welter of literary gossip, surrounded by people who managed, by most measures the world cared about, to do more than she did: to write more books, win more awards, attract more readers. Mary McCarthy, Joan Didion, and Susan Sontag all counted as her friends, though she did not become as famous as they did. She managed, somehow, to present her secondary status as evidence of more seriousness. There is always something slightly vulgar, to intellectuals, about worldly success, and Hardwick benefited from the idea that the best fiction, the best criticism truly thrive at a slight remove from the masses.
Hardwick’s life and work are full of dualities like these. For example, the joke the literary profiles liked to tell about Hardwick is that she came from Kentucky, a southern belle through and through, and yet arrived in New York with the stated intention of becoming a Jewish intellectual. This was in fact a joke Hardwick made many times herself, including in her 1977 novel, Sleepless Nights. She tried to qualify her intent in an interview with the Paris Review in 1985. “What I meant was the enlightenment, a certain deracination which I value, an angular vision, love of learning, cosmopolitanism, a word that practically means Jewish in Soviet lexicography,” she explained to the novelist Darryl Pinckney.
One might take issue with this: The deracination of midcentury Jewish intellectuals may not have been strictly chosen in the way Hardwick’s self-exile from the South surely was. But maybe, on another level, they had something in common: “Many are flung down carelessly at birth and they experience the diminishment and sometimes the pleasant truculence of their random misplacement,” she wrote in Sleepless Nights. Even though her whole family was from there she always felt, in Kentucky, like one of the “imports, those jarring and jarred pieces that sit in the closet among the matching china sets.”
Hardwick’s relationship to her own beginnings, however, fueled some of the best writing in this new volume of her essays. When in 1965 she went to Selma to cover the voting rights marches for The New York Review of Books, she wrote with personal stakes in the future of the South, more so than one usually found in her other work. Would its sense of isolation and exceptionalism deepen as white Southerners dug in to racist policies, or could the civil rights movement bring about a new era? She meets “a poor young man, a native of Alabama, in a hot, cheap black suit,” and he tells her he is made “right sick” by the “white folks mixed in with the colored.” Another sort of writer would merely have recorded this, perhaps added some editorializing about how awful it was. Not Hardwick. She intervened in the case:
And what could one answer: Go to see your social worker, find an agency that can help you, some family counsellor, or perhaps an outpatient clinic? I did say, softly, “Pull yourself together.” And he too shuffled off, like the convicts, his head bent down in some deep perturbation of spirit.
That “pull yourself together” was no doubt all the more devastating because it was quietly delivered, a judgment made not quite in kindness but with cutting finality. This is, too, a time-honored technique in the South, the “bless your heart” form of shutting people down. Hardwick was plainly a lot more Southern than she thought.
Once in New York, Hardwick claims to have meandered a bit, living with a quarrelsome gay roommate and going out to jazz clubs to see performances by, among others, Billie Holiday. But she quickly found a place among the fabled set of Partisan Review, a small-circulation magazine that somehow made itself into a legend. It was run and written by a number of men and Mary McCarthy, the novelist and critic, who always said, with more good humor than apology, that she was originally invited to join the journal only because she was dating one of the editors, the caustic Philip Rahv. By the time Hardwick met her in 1945, McCarthy was a celebrated figure in the group. In 1942, she had published The Company She Keeps, a book that pioneered the now oh-so-familiar story of a Young Literary Woman making her way in New York. The book was widely praised and also widely understood to be autobiographical. Hardwick always found McCarthy’s fidelity to the facts of her life in fiction curious. “If one would sometimes take the liberty of suggesting caution to her, advising prudence or mere practicality,” Hardwick wrote in her introduction to McCarthy’s Intellectual Memoirs: New York, 1936–1938, published in 1992, “she would look puzzled and answer: But it’s the truth.”
This was actually the third bit of writing Hardwick would publish about McCarthy, and the first she would publish after McCarthy’s death in 1989. Though Hardwick is often presented as one of McCarthy’s best friends, in reality the bond was prickly. One of its most public stings came when, just as McCarthy’s biggest popular success, The Group, topped charts in 1963, Hardwick published a parody of it. She wrote it under the pen name Xavier Prynne and published it in The New York Review of Books, which virtually guaranteed that McCarthy, a close friend of the editors, would find out who it was:
His clothes were thrown over the chair. Shamefully, she peeked. The label said, simply, MACY’S. She stroked his back, gently, and lay quietly wondering until suddenly, appalled, she felt violently hungry. In her slip she went to the kitchen and opened a can of Heinz Tomato Soup. Carefully she flavored it with a dash of stale curry powder. What she really wanted was a glass of pure, fresh milk, but the soup restored her tremendous Middlewestern energies and she decided to walk home, even though it was after midnight.
This piece, which appeared under the title “The Gang,” does not appear in Hardwick’s Collected Essays, but the small fracas over it reveals something of Hardwick’s character. When she first read the novel, she had actually written to McCarthy in vague praise of it. “It is very full, very rich,” was the sort of compliment Hardwick offered. Most experienced writers would, I think, recognize that sort of comment as a friend offering positive thoughts on a book they hadn’t responded to. Still, to follow vague praise with the publication of a brutal parody was on another level.
Anger? Jealousy? Contempt? It is hard to say exactly what Hardwick was feeling when she decided to publicly mock her friend’s work. She later wrote McCarthy an apology. “It is hard to go back to the time it was done, but it was meant as simply a little trick, nothing more. I did Not mean to hurt you and I hope you will forgive it.” McCarthy did, in time, perhaps out of a sense that the bigger fish should be generous to her less successful counterpart. Or even out of eventual sympathy with Hardwick’s misstep: McCarthy had, herself, a few run-ins with people she had savaged, by design or accident, in her own books. But it strains credulity not to suspect that Hardwick knew that this would upset her friend, and did it anyway.
No model of sisterhood or solidarity, of course, ever appealed to Hardwick. Toward feminism, and particularly its second wave, she was only intermittently sympathetic. She found Simone de Beauvoir’s The Second Sex almost ridiculous—in scope, in argument, in execution. The “almost,” of course, is key. She praised Beauvoir for being neither “a masochist, a Lesbian, a termagant, or a man-hater” and for having a “nervous, fluid, rare aliveness on every page.” But ultimately she thought there were natural differences between men and women: “A woman’s physical inferiority to a man is a limiting reality every moment of her life,” she argued.
It is only the whimsical, cantankerous, the eccentric critic, or those who refuse the occasion for such distinctions, who would say that any literary work by a woman, marvelous as these may be, is on a level with the very greatest accomplishments of men.
Hardwick gave herself some breathing room by arguing only about the available published evidence of literary greatness, not necessarily women’s abstract capacity to achieve it, though her dismissiveness of Austen, the Brontës, and George Eliot is unequivocal. It is also worth noting that this essay was published in 1953, and that Hardwick would go on writing—and writing specifically about women and literature—for another 40 years. Still, if this conviction is where the pistol fired on her own efforts as a writer, one has questions. After all, for all this protestation, we are still reading Hardwick in a way that we are not, really, reading Irving Howe. Or Paul Goodman. Or any of the innumerable very-good-but-not-quite-eternally memorable critics. And that has happened, in part, because there is a renewed interest in this question of women’s contributions to what might be called the American literary tradition.
It’s also true that most of Hardwick’s transcendent literary efforts after the Second Sex review emerged from her analyses of women. One of the shames of this volume is that it does not include all of them, perhaps because many have been published in another volume reprinted by NYRB Classics, called Seduction and Betrayal. So you will not find her rather unsentimental appraisal of the Fitzgeralds, in “Caesar’s Things”; nor her work on Sylvia Plath, in which, with 20 years more in the writing trenches, she concluded that, “Every artist is either a man or a woman and the struggle is pretty much the same for both.” In those pieces she had the knack for illustrating what might have been called feminist themes by way of specific details of specific lives. In Plath, for example, whose life quickly became for so many critics a parable about mental health and marital trauma, Hardwick saw no “general principles, sure origins, applications, or lessons”—a quality that might very well be found in Hardwick’s own writing. Hardwick had been saved from such brutalities, too, by not ever being elevated to the position of “feminist icon,” as Plath was. She continued to be read for her work rather than for details of her tumultuous personal life.
It is the variety of terrain they covered that makes Hardwick and the other midcentury women writers so engaging. These women wrote novels, they wrote memoirs, they wrote criticism, and they wrote about politics and current events. The eclecticism of their careers fueled the romance of the New York Intellectuals; they left not just a body of work but a way of life. Such a breadth of subject matter is less fashionable today, when, with some exceptions, writers tend to specialize. We have novelists and intellectuals, but few novelist-intellectuals or intellectual-novelists. Memoirists tend to stay in a lane, too, and nonfiction is virtually severed from fiction. Everyone has specialized. It’s often claimed that the rise of this or that genre—specifically the personal essay—has helped women writers rise like cream, but ironically it has narrowed the aspect ratio, too. Somehow the implication in all those think pieces is that women feel and think differently from men, which does nothing but keep us from exploring other realms of possibility. Yet Hardwick managed to do all this as a woman writer in what we imagine to be a harder time for them.
Hardwick has the reputation of someone who would not like such claims made about her. “Woman writer? A bit of a crunch trying to get those two words together,” she told Pinckney when he interviewed her. But then she also backtracked. “I do feel there is an inclination to punish women for what you might call presumption of one kind or another,” she said. The presumption that they might lay claim to the same freedom as men, to write as grandly as men do, must have been what she meant. And though all her life she seems to have lived with a sense of modesty about her talents, insisting that Lowell was the actual genius in the family, there was, perhaps, in there a final duality. Sometimes, she recognized, it’s best not to be the biggest luminary in the room.
Minnesota Senator Al Franken on Thursday heeded the demands of his Democratic colleagues in announcing he would resign “in the coming weeks,” but he did so with some defiance. “Some of the allegations against me are simply not true,” he said. “Others I remember very differently.” Many of his admirers now portray him as a fallen idol who let down the liberal cause. “It hurts not just because his supporters thought he was better than this, but because so many people were depending upon him,” wrote ThinkProgress’ Ian Millhiser. “He was supposed to be a progressive champion.”
But even as his own side buries him, Franken has found supporters in the most unlikely corner: Some conservatives and libertarians are defending him as a victim of a puritanical witch hunt.
On Fox News, Laura Ingraham described the campaign against Franken and other men accused of sexual harassment as a “lynch mob.” Her guest, former House Majority Leader Newt Gingrich—a man with some expertise in political sex scandals—agreed. “These are people who grew up in a party which used to preach free love, which used to think that all of the hippiedom was wonderful, who used to think they were somehow representing the future,” Gingrich complained. “And now they have suddenly curled into this weird puritanism which feels a compulsion to go out and lynch people without a trial.”
While it’s easy to dismiss Ingraham and Gingrich as cable-news windbags, similar sentiments were expressed by two more thoughtful, independent-minded analysts on the right: conservative Washington Examiner columnist Byron York and the libertarian pundit Cathy Young. “The #MeToo moment has turned into sexual McCarthyism,” Young lamented in The New York Daily News, arguing that, unlike other men caught up in sexual predation scandals, Franken was guilty of relatively minor and disputable sins. On Wednesday, York tweeted:
Al Franken’s right-wing defenders are not monolithic. Ingraham and Gingrich clearly have cynical motives: They need to discredit the campaign against sexual harassment in order to defend Republican transgressors like Senate candidate Roy Moore and President Donald Trump. “So I’ll tell you this tonight, be wary of the lynch mob you join today,” Ingraham argued. “Because tomorrow, it could be coming for your husband, your brother, your son, and yes, even your president.” The logic here is clear: We have to defend Franken today so we can defend Moore tomorrow.
Neither York nor Young is so nakedly partisan. Rather, they’re working from principles that compel them to put anti-feminism above fealty to the GOP. Young has long been a critic, on occasion an incisive one, of what she sees as the authoritarian tendencies of feminist opposition to rape culture. Yet Young’s defense of Franken shows the weakness of her approach to the subject, which is to micro-litigate the details of each accusation. In her column, she writes:
For instance: while Franken’s suggestion of accidental slippage while putting his hand on a woman’s waist has been widely derided, Canadian TV personality and entertainer Liana Kerzner, whom I recently interviewed about the #MeToo moment—and who has done numerous photo opportunities—believes it’s quite possible. Given the number of photos for which Franken has posed, one would expect the accusations to be in the double digits if he was a serial groper.
She’s arguing that it’s possible for an alleged groping to be an accident, which is true enough. But from that hypothetical, Young arbitrarily demands accusations “in the double digits” before we believe that Franken is guilty. So the fact that there are only eight accusations against Franken (so far) is taken as indicating innocence. Similarly, York’s nostalgia for the due process of yore is hardly reassuring when we remember that other notorious predators who served alongside Packwood, like senators Ted Kennedy and Strom Thurmond, got away with numerous offenses (which, in Thurmond’s case, included groping a fellow senator).
Partisanship is often regarded as a cancer in American politics. But there is little reason to celebrate a bipartisanship that violates party loyalty simply to minimize sexual harassment.
New York, NY (December 7, 2017) — Trump’s first year in office has been riddled not only with scandal, but with constant attacks on American democracy. Criticized for undermining democratic norms, Trump is far from the first major U.S. political figure to do so.
In “How A Democracy Dies,” Harvard professors Steven Levitsky and Daniel Ziblatt explore the long history of attacks on American political institutions. One way to break a democracy is “not at the hands of generals, but of elected leaders who subvert the very process that brought them to power.” Despite having constitutions in place and institutions that allow people to vote, leaders in countries such as Venezuela, Georgia, Peru, and the Philippines have been able to subvert the rule of law, allowing extremist demagogues to gain power. “The tragic paradox of the electoral route to authoritarianism,” write Levitsky and Ziblatt, “is that democracy’s enemies use the very institutions of democracy—gradually, subtly, and even legally—to kill it.” With Trump’s constant attacks remaining unchecked, can the system survive?
When a small town in Nebraska was given the opportunity to host Costco’s new chicken processing plant, environmentalists in the area were faced with a fundamental dilemma: try to convert people to their way of thinking, or co-opt a xenophobic cause they don’t believe in to shut down the project? Ted Genoways provides an in-depth look into this dilemma, following retired construction manager turned environmentalist Randy Ruppert. With an evident lack of support, Ruppert works alongside two unlikely allies: well-known xenophobe and opponent of undocumented labor John Wiegert and Tea Party leader Doug Wittman.
In August 2017, a small insurgent group known as the Arakan Rohingya Salvation Army led an attack on Myanmar’s police that resulted in the killing of 12 officers. The aftermath of this attack—a crackdown led by Myanmar’s military alongside Rakhine Buddhist vigilantes—saw the burning of over 300 villages in a months-long program of killing. “Myanmar’s Imagined Jihadis” provides insight into the government’s ethnic cleansing of the Rohingya in Myanmar. Interspersing heartbreaking accounts from Rohingya who have successfully fled, Jason Motlagh describes how Myanmar’s history of anti-Rohingya policy turned an attack on police into a major humanitarian crisis. Borrowing the rhetoric of the global war on terror and Islamophobia in the United States, Myanmar’s government has been able to justify its killings under the guise of combating terrorism.
Simone Tramonte’s photo essay, “Small Acts of Subversion,” provides a look at Iranian women 40 years after the fall of the Shah. Following the Iranian Revolution, Iran’s ruling clerics rolled back the Shah’s 1967 and 1975 family protection laws—which had raised the minimum age at which women could legally marry and allowed them to petition for custody of their children—and many of the regime’s stifling rules remain in place. What Tramonte shows through her photos is how Iranian women navigate the world around them: in one, a woman in a black chador watches children play in a fountain; in another, women learn to drive; in a third, a group of women gather for a party on a roof. As Dina Nayeri points out in her introduction, given Iran’s theocracy, “women have learned to carve out a life for themselves inside the hijab.”
In Up Front, Rachel M. Cohen details how the current battle over the Fair Housing Act could transform U.S. politics. The landmark anti-discrimination law turns 50 years old in 2018, but efforts to desegregate inner cities continue at a frustratingly slow pace, and the Trump administration is looking to undercut the gains made during the Obama years. The outcome of this fight, Cohen writes, could help decide the country’s political future. Today’s true electoral battlegrounds are the suburbs, which have grown increasingly diverse in recent years—a shift that fair housing policies will only accelerate.
Isaac Stone Fish lays out a potential solution to the current crisis with North Korea: a presidential summit between Trump and Kim Jong Un. The idea might seem preposterous—but, then again, so too was the idea of Nixon going to China before he did so in 1972. Nixon’s trip, Stone Fish writes, offers a helpful precedent for Trump, “a president who needs both a solution to the crisis with North Korea, and a foreign policy win to distract from his own mounting scandals at home.”
Staff writers Sarah Jones and Alex Shephard are also featured in Up Front this issue. In “Turning Pro-Life Blue,” Jones examines the current debate within the Democratic Party over whether it should have a litmus test for its candidates on the issue of abortion. Such purity tests, Jones argues, do the party no favors. To truly make inroads in red states and conservative communities, Democrats need to figure out how to connect with voters who don’t necessarily share their views. Fortunately, she finds, abortion might not be the wedge issue of legend.
In “Paper Tiger,” Shephard calls on the media to stop getting distracted by Trump’s attacks on them. Certainly, Trump’s war on the press is troubling, but many of his threats are empty ones. And by rushing to defend itself against every slight, the media allows Trump to shift the focus away from important issues.
Jon Wolfsthal and Kimberly Marten contribute Columns for the January/February issue. Wolfsthal—who was senior director for arms control and nonproliferation in the Obama administration—writes that Trump’s undisciplined rhetoric and changes to U.S. nuclear policy are increasing the risk of nuclear war. Marten, the director of the Program on U.S.-Russia Relations at Columbia University, writes about how she was scheduled to advise the U.S. State Department—before Rex Tillerson refused to sign off on her appointment. The current U.S. strategy with regards to Russia, particularly its focus on sanctions, Marten writes, won’t prevent Russia from interfering in future U.S. elections. Instead, the U.S. should take its cues from the Cold War and seek to limit confrontation with Russia—including by forging a limited cyber accord.
Although Martin Luther has long been cited as the sole figure who launched what became the Protestant Reformation, many dispute this. The Luther legend, as Marilynne Robinson writes, runs counter to how history works. Robinson’s “The Luther Legend” calls out the predecessors and prominent figures of the time who laid the foundation for Luther’s dissent: “...the idea that one man and one set of events shattered a great order and brought down chaos, minimizes a terrible continuity in the airs of the continent before and after the Reformation.”
In “Settling for Scores,” Diane Ravitch delves into the faults of standardized testing analyzed by Daniel Koretz in his new book, The Testing Charade: Pretending to Make Schools Better. Koretz breaks down the negative consequences of the education reforms of the 2000s and the lasting impact Bush’s No Child Left Behind Act has had. As Ravitch writes, NCLB “vastly expanded the federal government’s role in education, venturing where no other administration of either party had dared to go, it set a goal that was literally impossible for schools, districts and states to meet.”
With the release of The Collected Essays of Elizabeth Hardwick, Michelle Dean’s “The Company She Kept” explores how Hardwick’s relationships with New York intellectuals led to some of her best writing. Rachel Syme reviews Errol Morris’s new crime docudrama Wormwood. While popular docuseries like The Jinx and Making a Murderer are heavily focused on the mystery surrounding murder, Wormwood follows the mourning of a grieving son consumed with finding out the truth behind his father’s death. Christian Lorentzen reviews Austrian director Michael Haneke’s new film Happy End. Known for his bleak and violent films, Haneke’s latest work examines the effect of parents’ failings on their children through his signature lens. And Brenda Wineapple examines how Richard White’s The Republic for Which It Stands is really a history of Republicanism in its chronicling of American Reconstruction and the Gilded Age.
Rebecca Hazelton’s “In the City of Desire” is the featured Poem this month. For Backstory, photographer Thomas Dworzak highlights one of the ways Russian children learn about the glory days of the Soviet Union.
The January/February issue of The New Republic is available on newsstands today.
For additional information, please contact email@example.com.
Twenty years ago, it would have been unthinkable that a white woman might have a shot at becoming the next mayor of Atlanta. In 2001, the city elected Shirley Franklin, a black woman, to city hall by such a wide margin that the race never even advanced to a runoff (she won more than 50 percent of the vote even with the vote split among several other candidates). Since then, African American politicians have continued to win the city’s mayoral races, as they have for decades.
This year, however, Atlanta came startlingly close to getting its first white mayor in 44 years. On Tuesday, voters headed to the polls for a runoff between Keisha Lance Bottoms, a city councilor with deep ties to the black political establishment, and Mary Norwood, her white, more conservative opponent. Bottoms had emerged from the first round of voting in November as the frontrunner. Her particular brand of centrist liberalism meshes well with the politics of the city as a whole. And the current mayor, Kasim Reed, who is stepping down after two terms in office, had publicly endorsed her. But rather than sailing to victory on Tuesday night, Bottoms won by just 759 votes, and now, the ballots are being recounted.
That Norwood came within striking distance of Bottoms is remarkable, not just because of her race, but also because of her party affiliation. Norwood calls herself an independent, but her opponents often paint her as a Republican. Her campaign staff had ties to the Trump campaign—her campaign treasurer endorsed Trump—and she refused to endorse Jon Ossoff in his bid last year for Tom Price’s vacated House seat. She has said she voted for Hillary Clinton last year, but since then she has stopped short of criticizing the president, who has made a point of attacking prominent African Americans. Her near-victory in a city that has voted for Democrats since 1879 has underscored how deeply divided this liberal Southern city is politically. Demographic changes and a roiling corruption scandal have eaten away at the machine’s power to corral votes in the city. And without a new strategy, Democrats may continue to face uncomfortably tight results in races like this one, which ought to have been an easy win for Kasim Reed’s handpicked successor.
In November, before the first round of voting, Bottoms and Norwood faced a crowded field of candidates. As the only conservative in the race, Norwood was widely expected to sweep the conservative vote, clinching one of the top two slots and advancing to a runoff. The nine Democrats in the race were competing for a second seat in the runoff, with Bottoms facing several strong challenges from the left. Progressive candidates like former city councilor Cathy Woolard and state Senator Vincent Fort argued that the policies that Reed had enacted in a push to attract corporations and business development to the city had in fact harmed its low-income population, pushing them out into the suburbs. Fort, who boasted endorsements from Bernie Sanders and the Atlanta branch of the Democratic Socialists of America, sparred with the Democratic establishment and was critical of Bottoms, even dramatically accusing her during a debate of failing to pay her federal income taxes.
After Bottoms and Norwood advanced to the runoff, the national Democratic Party quickly fell in line. Bottoms was able to rack up an impressive slew of endorsements from prominent national politicians, including former Atlanta mayor and civil rights icon Andrew Young, New York City Mayor Bill de Blasio, and California Senator Kamala Harris. And the Democratic Party of Georgia spent more than $165,000 on an attack campaign against Norwood, the Atlanta Journal-Constitution reported.
But on a local level, there was a deep split over whether to back Bottoms. Fort, for example, pointedly refused to endorse her bid for mayor (his office did not return requests for comment this week). And many of his supporters shared his concerns. Tim Franzen, a Fort campaign aide and longtime progressive organizer, voted for Norwood, charging the Democratic establishment with becoming “out of touch on a national and local level.” A housing justice activist who works on municipal issues, he sees Bottoms as corrupt and too friendly with big developers. “I vote for candidates that I think present the least amount of threat,” he said.
Some politicians felt the same way. Shirley Franklin and the president of Atlanta’s city council, Ceasar Mitchell, along with three candidates who were defeated in the first round of the race, all endorsed Norwood over Bottoms. When asked to explain her endorsement, Woolard, a leftist candidate who finished third in the first round, said: “I feel like we need a clean break with this administration and a new start here with a fresh set of players.”
Bottoms has been tarnished by a roiling bribery scandal that has engulfed Atlanta’s City Hall in recent months. In September, the FBI raided the offices of an engineering firm that had been granted more than $100 million in city contracts since 2009, and Bottoms had to promise to return more than $25,000 in campaign donations she had received from the firm while she was a city councilor. Her involvement in the scandal may explain—at least in part—why some Democrats would rather have had a more conservative candidate like Norwood in office than a member of Atlanta’s blinkered, and now seemingly compromised, Democratic establishment.
Still, their refusals to support Bottoms speak to a broader shift in Atlanta politics. The black establishment has lost some of its grasp on power in the city. Since the election of Atlanta’s first black mayor, Maynard Jackson, in 1973, Atlanta has been run like a machine, with the political elites doling out insider favors and cultivating a web of lucrative connections. The results of Tuesday’s runoff show just how weak that machine is today. “The Maynard Machine is defunct,” Emory University political science professor Andra Gillespie told The New Republic on Wednesday.
That may be because Atlanta now has more white voters than it once did. People are flocking to Atlanta to take advantage of the job opportunities there. The influx has pushed housing prices inside the city up—and poorer African Americans out into the suburbs. Indeed, according to U.S. Census data, the black population in Atlanta declined by nearly 12 percent between 2000 and 2010 while the white population rose by more than 16 percent during that time period. And while the white population moving into Atlanta does tend to lean liberal, they are still more likely, according to Gillespie, to vote for Norwood than Bottoms, as a result of the city’s racially polarized voting patterns.
Ultimately, the Democrats need to develop a new strategy to overcome these demographic shifts in cities like Atlanta, finding new ways to energize activists now that the power of the Democratic machine has begun to wane. This should start with embracing leftist politicians like Fort or Woolard, who can articulate a progressive platform that appeals to voters who have been left behind as gentrification and corporatization change cities like Atlanta across the South. Fort and Woolard were able to harness some real energy on the ground this past summer. Put together, their shares of the vote in the first round would have been enough for a progressive candidate to advance to the runoff as the frontrunner, a clear sign that progressive candidates could make a dent in a race like this one.
But as long as the Democratic Party continues to push candidates tarnished by scandal, and those who favor big businesses over social justice advocacy, it will only further alienate a key and growing progressive constituency, opening the door a little wider for candidates with more conservative politics to step in.
Claire Foy knows how to breathe. She expresses more with each exhalation than most actors do with their whole bodies. In a scene from the second season of the Netflix drama The Crown, which premieres tomorrow, we see Foy, as a 30-year-old Queen Elizabeth II, attending the ballet alone. It is 1956, and her husband, Philip, the Duke of Edinburgh, is somewhere at sea. What began as a short yacht trip to open the Summer Olympics in Melbourne stretched into a five-month circumnavigation; the Queen remained in England with the children while the Duke sailed around the globe with no clear date of return.
When we see the Queen in her box at the ballet, she has just learned from her mischievous younger sister, Princess Margaret (the slinky, droll Vanessa Kirby), that Philip’s best friend and private secretary, Mike Parker, has earned a reputation for arranging dalliances between aristocrats and beautiful women, namely ballerinas. As the principal dancer twirls across the stage in a tutu, she keeps glancing up at the Queen in her box with a glare that transmits both fear and smugness. Did the prima ballerina have an affair with Philip, or are her knowing glances a figment of the Queen’s imagination? What we do know is that the Queen can never breathe a word of discomfort to anyone. And so, she simply breathes.
We see Foy’s chest rising and falling, faster and faster underneath the weight of her diamond choker. She is experiencing as much of a royal panic attack as is allowed in full view of thousands of theatergoers (which is to say, it’s subtle). Her clavicles thrum with anxiety and her eyes dart back and forth across the stage. Confronted with a potential adulteress, she must keep her composure. This is where Foy truly shines as the monarch. She understands that embodying the Queen is an exercise in communicating volumes while saying almost nothing. In The Crown—as well as in, I would imagine, the day-to-day lives of the Windsors—subtext is everything.
The first season of The Crown, created by the playwright Peter Morgan, was mostly prologue. If the current Elizabethan monarchy were written as a comic book, season one would be the hero’s origin story, in which she must realize and accept the gravity of her powers. Morgan has spent much of his professional life fascinated by Elizabeth’s reign. He wrote the Helen Mirren biopic The Queen, and also The Audience, a hot-ticket play starring Mirren that explored much of the same terrain. The first season showed his tender interest in her youth, when she frolicked in fields with hunting dogs and bred horses.
Morgan’s script is kind, in the first season, to the 25-year-old Princess, who grew up knowing her fate but was not fully ready to embrace it. When her father, Bertie, dies suddenly and she becomes the sovereign, it takes her several months—and several episodes—to get her proper bearings. When she does find her sea legs, her transformation into a stoic leader is swift. Season one ends on a bitter note, after Elizabeth forbids her sister from marrying the man she loves, the once-divorced Peter Townsend. Both the Parliament and the Church reject Margaret’s plea to marry, citing Townsend’s past as a liability to the Commonwealth, and the Queen is forced to accept their verdict and leave her sister heartbroken. This is a cold move from a sister, but a bold move as a monarch. It is not that Elizabeth puts country over family; it is that she recognizes that for her, the two entities are never truly separate. This sets up a tension that runs throughout the second season of the show: how much of her personal life Elizabeth must sacrifice to reign supreme, and how little of her heart remains her own.
Which brings us back to her breathing. Foy is, in the second season, a genius of restraint. So much so, that the secondary characters in the show threaten to eclipse her spotlight. Vanessa Kirby, as Margaret, is radiant as a woman in love. In the fifth episode, she falls for the swinging society photographer Antony Armstrong-Jones (Matthew Goode, cutting a sharp profile as a caustic roué in ivory cashmere), and she manages to veer wildly between immature infatuation and petulant entitlement, icy sarcasm and besotted romance. Matt Smith, as Philip, is also given prime scenes this season, as we see him grow grizzled and cocky on the royal yacht in the absence of his wife and then callow and mean when the rumors bring her to the ship for a tense confrontation about how to save the marriage.
This tête-à-tête on the boat, which opens the season, becomes the driving narrative force of the first three episodes. Morgan shows us the fight once, and then two episodes later—after lingering on the conflict over the Suez Canal and the Prime Minister’s clandestine plan to foment it—he shows it to us again. This time, we dive deep into the drama that has led to the face-off, from Philip’s gallivanting to Elizabeth’s increasing insecurities about her age and desirability. It hits hard when Elizabeth tells him that they are “like no other couple in the country” because they have no escape hatch; they are at the point of dysfunction where any other pair would separate. Instead, she offers to make Philip a Prince, to mollify his complaints that even his young son outranks him. In the end, it is a band-aid on their troubles, but then, Philip seems like a cat with the cream once he wears the red velvet crown.
The Crown is one of Netflix’s most expensive and expansive investments to date. Morgan reportedly got a $100 million budget to make the first season. The result is a period piece so sumptuous and baroque that it feels deeply transporting; the show is literally fit for a queen. Often this excess is intoxicating. The costuming on the show is impeccable, as are the crisp interiors, the brisk foxhunts at dawn, the crown jewels that gleam like lighthouses from Foy’s appendages.
And yet, spending so much time among the inherited riches can make one feel dizzy. As Morgan himself has said of the monarchy, “It is clearly a deranged institution and a completely insane system, but perhaps it’s the insanity that makes it work.” He is so fascinated by the royal family because deep down, he questions their purpose, even when faithfully recreating the Queen’s drawing room down to the last gilded ottoman. In Vanity Fair, Morgan said that he sees his work on The Crown as being closer to The Godfather than to Downton Abbey:
It is essentially about a family in power and survival. I wish I could get to write sequences like the revolving door and shooting people and horses’ heads, but I can’t. Because this is not a violent family.
The Corleones are the authors of their own downfall, and while the Windsors continue to flourish in England, it is clear that Morgan hopes with the second season of The Crown to destabilize the monarchy myth that he set up so seamlessly in the first. Where the first season depicted Elizabeth as a bit of a naif, new to the job and wobbly as a result, the second season sees her fully shouldering the work, while clearly suffering privately from a deep unhappiness. This interior drama may be where Morgan and his collaborators are inventing: We still do not know if Philip really strayed, or if it made the Queen despondent. But it is clear that The Crown, which gave us a gentle portrait of the sovereign’s early years in its first season, may now raise some eyebrows at Buckingham Palace.
If Meghan Markle streams The Crown for insight into her new life, she will see her future grandmother-in-law making impossible choices—choices made in an era that would never have allowed a woman like Markle into the narrative at all. But, as Foy so brilliantly conveys in the new season, there was so little leeway in the young queen’s life. She sacrifices her own contentment, her own comfort, in order to maintain an even keel. Inside, she is seething. But she is remembering to breathe.
We tend to think of democracies dying at the hands of men with guns. During the Cold War, coups d’état accounted for nearly three out of every four democratic breakdowns. Military coups toppled Egyptian President Mohammed Morsi in 2013 and Thai Prime Minister Yingluck Shinawatra in 2014. In cases like these, democracy dissolves in spectacular fashion. Tanks roll in the streets. The president is imprisoned or shipped off into exile. The constitution is suspended or scrapped.
By and large, however, overt dictatorships have disappeared across much of the world. Violent seizures of power are rare. But there’s another way to break a democracy: not at the hands of generals, but of elected leaders who subvert the very process that brought them to power. In Venezuela, Hugo Chávez was freely elected president, but he used his soaring popularity (and the country’s vast oil wealth) to tilt the playing field against opponents, packing the courts, blacklisting critics, bullying independent media, and eventually eliminating presidential term limits so that he could remain in power indefinitely. In Hungary, Prime Minister Viktor Orbán used his party’s parliamentary majority to pack the judiciary with loyalists and rewrite the constitutional and electoral rules to weaken opponents. Elected leaders have similarly subverted democratic institutions in Ecuador, Georgia, Peru, the Philippines, Poland, Russia, Sri Lanka, Turkey, Ukraine, and elsewhere. In these cases, there are no tanks in the streets. Constitutions and other nominally democratic institutions remain in place. People still vote. Elected autocrats maintain a veneer of democracy while eviscerating its substance. This is how most democracies die today: slowly, in barely visible steps.
How vulnerable is American democracy to such a fate? Extremist demagogues emerge from time to time in all societies, even in healthy democracies. An essential test of this kind of vulnerability isn’t whether such figures emerge but whether political leaders, and especially political parties, work to prevent them from gaining power. When established parties opportunistically invite extremists into their ranks, they imperil democracy.
Once a would-be authoritarian makes it to power, democracies face a second critical test: Will the autocratic leader subvert democratic institutions or be constrained by them? Institutions alone are not enough to rein in elected autocrats. Constitutions must be defended—by political parties and organized citizens, but also by democratic norms, or unwritten rules of toleration and restraint. Without robust norms, constitutional checks and balances do not serve as the bulwarks of democracy we imagine them to be. Instead, institutions become political weapons, wielded forcefully by those who control them against those who do not. This is how elected autocrats subvert democracy—packing and “weaponizing” the courts and other neutral agencies, buying off the media and the private sector (or bullying them into silence), and rewriting the rules of politics to permanently disadvantage their rivals. The tragic paradox of the electoral route to authoritarianism is that democracy’s enemies use the very institutions of democracy—gradually, subtly, and even legally—to kill it.
The United States failed the first test in November 2016, when it elected a president with no real allegiance to democratic norms. Donald Trump’s surprise victory was made possible not only by public disaffection but also by the Republican Party’s failure to keep an extremist demagogue from gaining the nomination.
How serious a threat does this now represent? Many observers take comfort in the U.S. Constitution, which was designed precisely to thwart and contain demagogues like Trump. The Madisonian system of checks and balances has endured for more than two centuries. It survived the Civil War, the Great Depression, the Cold War, and Watergate. Surely, then, it will be able to survive the current president?
We are less certain. Democracies work best—and survive longer—when constitutions are reinforced by norms of mutual toleration and restraint in the exercise of power. For most of the twentieth century, these norms functioned as the guardrails of American democracy, helping to avoid the kind of partisan fights-to-the-death that have destroyed democracies elsewhere in the world, including in Europe in the 1930s and South America in the 1960s and 1970s. But those norms are now weakening. By the time Barack Obama became president, many Republicans, in particular, questioned the legitimacy of their Democratic rivals and had abandoned restraint for a strategy of winning by any means necessary. Donald Trump has accelerated this process, but he didn’t cause it. The challenges we face run deeper than one president, however troubling this one might be.
The reason no extremist demagogue won the presidency before 2016 is not the absence of contenders for such a role. To the contrary, extremist figures have long dotted the landscape of American politics, from Henry Ford and Huey Long to Joseph McCarthy and George Wallace. An important protection against would-be authoritarians has not just been the country’s firm commitment to democracy but, rather, our political parties, democracy’s gatekeepers.
Because parties nominate presidential candidates, they have the ability—and the responsibility—to keep antidemocratic figures out of the White House. They must, accordingly, strike a balance between two roles: one in which they choose the candidates that best represent the party’s voters; and another, what the political scientist James Ceaser calls a “filtration” role, in which they screen out those who pose a threat to democracy or are otherwise unfit to hold office.
These dual imperatives—choosing a popular candidate and keeping out demagogues—may, at times, conflict with each other. What if the people choose a demagogue? This is the recurring tension at the heart of the U.S. presidential nomination process, from the founders’ era through today. An overreliance on gatekeeping can create a world of party bosses who ignore the rank and file and fail to represent the people. But an overreliance on the “will of the people” can also be dangerous, for it can lead to the election of a demagogue who threatens democracy itself. There is no escape from this tension. There are always trade-offs.
For most of American history, political parties prioritized gatekeeping over openness. There was always some form of the proverbial smoke-filled room for this. In the early nineteenth century, presidential candidates were chosen by groups of congressmen in Washington, in a system known as “Congressional Caucuses.” Then, beginning in the 1830s, candidates were nominated in national party conventions made up of delegates from each state. Any candidate who lacked support among each party’s network of state and local politicians had no chance of success. Primary elections were introduced during the Progressive era in an effort to dismantle excessive party control. But these brought little change—in part because many states didn’t use them, but mostly because the elected delegates weren’t required to support the candidate who won the primary. So real power still remained in the hands of party officials and officeholders.
These “organization men” were hardly representative of American society. Indeed, they were the virtual definition of an old-boys network. Most rank-and-file party members, especially the poor and politically unconnected, women, and minorities, weren’t represented in parties’ smoke-filled rooms and so were largely excluded from the presidential nomination process. The convention system, on the other hand, was an effective gatekeeping institution. It systematically filtered out would-be authoritarian candidates. Dangerous outsiders simply couldn’t win the party nomination. And as a result, most didn’t even try.
This changed after 1968. The riotous Democratic National Convention in Chicago triggered far-reaching reform. Following presidential candidate Hubert Humphrey’s defeat that fall, the Democratic Party created the McGovern-Fraser Commission and tasked it with rethinking the nomination system. The commission’s final report, published in 1971, cited an old adage: “The cure for the ills of democracy is more democracy.” With the legitimacy of the presidential selection process at stake, party leaders felt intense pressure to transform it. As George McGovern put it, “Unless changes are made, the next convention will make the last look like a Sunday-school picnic.” If the people were not given a real say, the McGovern-Fraser report warned, they would turn to “the anti-politics of the street.”
The commission issued a set of recommendations that the two parties adopted before the 1972 election. What emerged was a system of binding presidential primaries. This meant, in principle, that for the first time, the people who chose the parties’ presidential candidates would neither be controlled by party leaders nor free to make backroom deals at the convention; rather, they would faithfully reflect the will of their states’ primary voters. Democratic National Committee Chair Larry O’Brien called the reforms “the greatest goddamn changes since the party system.” McGovern, who unexpectedly won the 1972 Democratic nomination, called the new primary system “the most open political process in our national history.” For the first time, the party gatekeepers could be circumvented—and beaten.
Some political scientists worried about the new system. Nelson Polsby and Aaron Wildavsky warned, for example, that primaries could “lead to the appearance of extremist candidates and demagogues” who, unrestrained by party allegiances, “have little to lose by stirring up mass hatreds or making absurd promises.” Initially, these fears seemed overblown. Outsiders did emerge: Civil rights leader Jesse Jackson ran for the Democratic Party nomination in 1984 and 1988, while Southern Baptist leader Pat Robertson (1988), television commentator Pat Buchanan (1992, 1996, 2000), and Forbes magazine publisher Steve Forbes (1996) ran for the Republican nomination. But they all lost.
Circumventing the party establishment was, it turned out, easier in theory than in practice. Capturing a majority of delegates required winning primaries all over the country, which, in turn, required money, favorable media coverage, and, crucially, people working on the ground in all states. Any candidate seeking to complete the grueling obstacle course of U.S. primaries needed allies among donors, newspaper editors, interest groups, activist groups, and state-level politicians such as governors, mayors, senators, and congressmen. In 1976, Arthur Hadley described this arduous process as the “invisible primary.” He claimed that this phase, which occurred before the primary season even began, was “where the winning candidate is actually selected.” Without the support of the party establishment, Hadley argued, it was nearly impossible to win the nomination. For a quarter of a century, he was right.
In June 2015, when Donald Trump descended an escalator to the lobby of Trump Tower to announce that he was running for president, there was little reason to think he could succeed where previous outsiders had failed. His opponents were career politicians and lifelong Republicans. Not only did Trump lack any political experience, but he had switched his party registration several times and even contributed to Hillary Clinton’s campaign for the Senate. His weakness among party insiders, most observers believed, would spell his demise.
By the time Trump won the March 1 Super Tuesday primaries, however, he had laid waste to Hadley’s “invisible primary,” rendering it irrelevant. Undoubtedly, Trump’s celebrity status played a role. But equally important was the changed media landscape. Trump had the sympathy or support of right-wing media personalities such as Sean Hannity, Ann Coulter, Mark Levin, and Michael Savage, as well as the increasingly influential Breitbart News. He also found new ways to use old media as a substitute for party endorsements and traditional campaign spending. By one estimate, the Twitter accounts of MSNBC, CNN, CBS, and NBC—four outlets that no one could accuse of pro-Trump leanings—mentioned Trump twice as often as Hillary Clinton. According to another study, Trump enjoyed up to $2 billion in free media coverage during the primary season. Trump didn’t need traditional Republican power brokers. The gatekeepers of the invisible primary weren’t merely invisible; by 2016, they were gone entirely.
The fact that Trump was able to capture the Republican nomination for president should have set off alarm bells. No other major presidential candidate in modern U.S. history, including Richard Nixon, had demonstrated such a weak public commitment to constitutional rights and democratic norms. When gatekeeping institutions fail, mainstream politicians have to do everything possible to keep dangerous figures away from the centers of power. For Republicans that meant doing the unthinkable: backing Hillary Clinton.
There is a recent global precedent for such a move. In France’s 2017 presidential election, the conservative Republican Party candidate, François Fillon, who was defeated in the race’s first round, called on his partisans to vote for center-left candidate Emmanuel Macron in the runoff to keep far-right candidate Marine Le Pen out of power. And in 2016, many Austrian conservatives backed Green Party candidate Alexander Van der Bellen to prevent the election of far-right radical Norbert Hofer. In cases like these, politicians endorsed ideological rivals—running the risk of angering their party base but redirecting many of their voters to keep extremists out of power.
If Republican leaders had broken decisively with Trump, telling Americans loudly and clearly that he posed a threat to the country’s cherished democratic institutions, he might never have ascended to the presidency. The hotly contested, red-versus-blue dynamics of the previous four elections would have been disrupted, and the Republican electorate would have split. What happened, unfortunately, was very different. Despite their hesitation and reservations, most Republican leaders closed ranks behind Trump, creating the image of a unified party. The election was normalized. The race narrowed. Trump won.
America’s constitutional system of checks and balances was designed to prevent leaders from concentrating and abusing power, and for most of our history, it has succeeded. Abraham Lincoln’s concentration of power during the Civil War was reversed by the Supreme Court after the war ended. Richard Nixon’s illegal wiretapping, exposed after the 1972 Watergate break-in, triggered a high-profile congressional investigation and bipartisan pressure for a special prosecutor, which eventually forced his resignation in the face of certain impeachment. In these and other instances, our political institutions served as crucial bulwarks against authoritarian tendencies.
But constitutional safeguards, by themselves, aren’t enough to secure a democracy once an authoritarian is elected to power. Even well-designed constitutions can fail. Germany’s 1919 Weimar constitution was designed by some of the country’s greatest legal minds. Its long-standing and highly regarded Rechtsstaat (“rule of law”) was considered by many as sufficient to prevent government abuse. But both the constitution and the Rechtsstaat dissolved rapidly in the face of Adolf Hitler’s rise to power in 1933.
Or consider the experience of postcolonial Latin America. Many of the region’s newly independent republics modeled themselves directly on the United States, adopting U.S.-style presidentialism, bicameral legislatures, supreme courts, and in some cases, electoral colleges and federal systems. Some wrote constitutions that were near-replicas of the U.S. Constitution. Yet nearly all the region’s embryonic republics plunged into civil war and dictatorship. For example, Argentina’s 1853 constitution closely resembled ours: Two-thirds of its text was taken directly from the U.S. Constitution. Yet these constitutional arrangements did little to prevent fraudulent elections in the late nineteenth century, military coups in 1930 and 1943, and Perón’s populist autocracy.
Likewise, the Philippines’ 1935 constitution has been described by legal scholar Raul Pangalangan as a “faithful copy of the U.S. Constitution.” Drafted under U.S. colonial tutelage and approved by the U.S. Congress, the charter “provided a textbook example of liberal democracy,” with a separation of powers, a bill of rights, and a two-term limit on the presidency. But President Ferdinand Marcos, who was loath to step down when his second term ended, dispensed with it rather easily after declaring martial law in 1972.
If constitutional rules alone do not secure democracy, then what does? Much of the answer lies in the development of strong democratic norms. Two norms stand out: mutual toleration, or accepting one’s partisan rivals as legitimate (not treating them as dangerous enemies or traitors); and forbearance, or deploying one’s institutional prerogatives with restraint—in other words, refraining from using the letter of the Constitution to undermine its spirit (the practice legal scholar Mark Tushnet calls “constitutional hardball”).
Donald Trump is widely and correctly criticized for assaulting democratic norms. But Trump didn’t cause the problem. The erosion of democratic norms began decades ago.
In 1979, newly elected Congressman Newt Gingrich came to Washington with a blunter, more cutthroat vision of politics than Republicans were accustomed to. Backed by a small but growing group of loyalists, Gingrich launched an insurgency aimed at instilling a more “combative” approach in the party. Taking advantage of a new media technology, C-SPAN, Gingrich used hateful language, deliberately employing over-the-top rhetoric. He described Democrats in Congress as corrupt and sick. He questioned his Democratic rivals’ patriotism. He even compared them to Mussolini and accused them of trying to destroy the country.
Through a new political advocacy group, GOPAC, Gingrich and his allies worked to diffuse these tactics across the party. GOPAC produced more than 2,000 training audiotapes, distributed each month to get the recruits of Gingrich’s “Republican Revolution” on the same rhetorical page. Gingrich’s former press secretary Tony Blankley compared this tactic of audiotape distribution to one used by Ayatollah Khomeini on his route to power in Iran.
Though few realized it at the time, Gingrich and his allies were on the cusp of a new wave of polarization rooted in growing public discontent, particularly among the Republican base. Gingrich didn’t create this polarization, but he was one of the first Republicans to sense—and exploit—the shift in popular sentiment. And his leadership helped to establish “politics as warfare” as the GOP’s dominant strategy.
After the Republicans’ landslide victory in the 1994 elections, the GOP began to seek victory by “any means necessary.” House Republicans refused to compromise, for example, in budget negotiations, leading to a five-day government shutdown in November 1995 and a 21-day shutdown a month later. This was a dangerous turn. As norms of forbearance weakened, checks and balances began to devolve into deadlock and dysfunction.
The apogee of ’90s constitutional hardball was the December 1998 House vote to impeach President Bill Clinton. Only the second presidential impeachment in U.S. history, the move ran afoul of long-established norms. The investigation, beginning with the dead-end Whitewater inquiry and ultimately centering on Clinton’s testimony about an extramarital affair, never revealed anything approaching conventional standards for what constitutes “high crimes and misdemeanors.” The Republican House members also moved ahead with impeachment without bipartisan support, which meant that Clinton would almost certainly not be convicted by the Senate (he was acquitted there in February 1999). In an act without precedent in U.S. history, House Republicans had politicized the impeachment process, downgrading it, in the words of congressional experts Thomas Mann and Norman Ornstein, to “just another weapon in the partisan wars.”
Despite George W. Bush’s promise to be a “uniter, not a divider,” partisan warfare only intensified during his eight years in office. Bush governed hard to the right, abandoning all pretense of bipartisanship on the counsel of Karl Rove, who had concluded that the electorate was so polarized by this time that Republicans could win by mobilizing their own base rather than appealing to independent voters. And with the exception of the aftermath of the September 11 attacks and subsequent military actions in Afghanistan and Iraq, congressional Democrats eschewed bipartisan cooperation in favor of obstruction. Harry Reid and other Senate leaders used Senate rules to slow down or block Republican legislation and broke with precedent by routinely filibustering Bush proposals they opposed.
Senate Democrats also began obstructing an unprecedented number of Bush’s judicial nominees, either by rejecting them outright or by allowing them to languish by not holding hearings. The norm of deference to the president on judicial appointments was dissolving. Indeed, The New York Times quoted one Democratic strategist as saying that the Senate needed to “change the ground rules ... there [is] no obligation to confirm someone just because they are scholarly or erudite.” After the Republicans won back the Senate in 2002, the Democrats turned to filibusters to block the confirmation of several appeals court nominations. Republicans reacted with outrage. Conservative columnist Charles Krauthammer wrote that “one of the great traditions, customs, and unwritten rules of the Senate is that you do not filibuster judicial nominees.” During the 110th Congress, the last of Bush’s presidency, the number of filibusters reached an all-time high of 139—nearly double that of even the Clinton years.
If Democrats abandoned procedural forbearance in order to obstruct the president, Republicans did so in order to protect him. As Mann and Ornstein put it, “Long-standing norms of conduct in the House ... were shredded for the larger goal of implementing the president’s program.” The GOP effectively abandoned oversight of a Republican president, weakening Congress’s ability to check the executive. Whereas the House had conducted 140 hours of sworn testimony investigating whether Clinton had abused the White House Christmas card list in an effort to drum up new donors, it never subpoenaed the White House during the first five years of Bush’s presidency. Congress resisted oversight of the Iraq War, launching only superficial investigations into serious abuse cases including the torture at Abu Ghraib. The congressional watchdog became a lapdog, abdicating its institutional responsibilities.
The assault on the basic norms governing American democracy escalated during Barack Obama’s presidency. Challenges to Obama’s legitimacy, which had begun with fringe conservative authors, talk-radio personalities, TV talking heads, and bloggers, were soon embodied in a mass political movement: the Tea Party, which started to organize just weeks after Obama’s inauguration.
Two threads consistently ran through Tea Party discourse, both of which broke with established norms. One was that Obama posed an existential threat to our democracy. Just days after Obama’s election, Georgia Congressman Paul Broun warned of a coming dictatorship comparable to Nazi Germany or the Soviet Union. Iowa Tea Partier Joni Ernst, who would soon be elected to the U.S. Senate, claimed that Obama “has become a dictator.”
The second thread was that Obama was not a “real American”—a claim that was undoubtedly fueled by racism. According to Tea Party activist and radio host Laurie Roth: “We are seeing a worldview clash in our White House. A man who is a closet secular-type Muslim, but he’s still a Muslim. He’s no Christian. We’re seeing a man who’s a Socialist Communist in the White House, pretending to be an American.” The “birther movement” went even further, questioning whether Obama was born in the United States—and thus challenging his constitutional right to hold the presidency.
Attacks of this kind have a long pedigree in American history. Henry Ford, Father Coughlin, and the John Birch Society all adopted similar language. But the challenges to Obama’s legitimacy were different in two important ways. First, they were not confined to the fringes, but rather came to be accepted widely by Republican voters. According to a 2011 Fox News poll, 37 percent of Republicans believed that Obama was not born in the United States, and 63 percent said they had some doubts about his origins. Forty-three percent of Republicans reported believing he was a Muslim in a CNN/ORC poll, and a Newsweek poll found that a majority of Republicans believed Obama favored the interests of Muslims over those of other religions.
Second, unlike past episodes of extremism, this wave reached into the upper ranks of the Republican Party. With the exception of the McCarthy period, the two major parties had typically kept such intolerance of each other at the margins for more than a century. Neither Father Coughlin nor the John Birch Society had the ear of top party leaders. Now, open attacks on Obama’s legitimacy (and later, Hillary Clinton’s) came from the mouths of leading national politicians.
In recent years, the Tea Party’s extreme views have become fully integrated into the Republican mainstream. In 2010, more than 130 Tea Party–backed candidates ran for Congress, and more than 40 were elected. By 2011, the House Tea Party Caucus had 60 members, and in 2012, Tea Party–friendly candidates emerged as contenders for the Republican presidential nomination. In 2016, the Republican nomination went to a birther, at a national party convention in which Republican leaders called their Democratic rival a criminal and led chants of “Lock her up.” For the first time in many decades, top Republican figures—including one who would soon be president—had overtly abandoned norms of mutual toleration, goaded by a fringe that was no longer fringe.
If, 25 years ago, someone had described to you a country where candidates threatened to lock up their rivals, political opponents accused the government of election fraud, and parties used their legislative majorities to impeach presidents and steal Supreme Court seats, you might have thought of Ecuador or Romania. It wouldn’t have been the United States of America.
But Democrats and Republicans have become much more than just two competing parties, sorted into liberal and conservative camps. Their voters are now deeply divided by race, religious belief, culture, and geography. Republican politicians from Newt Gingrich to Donald Trump learned that in a polarized society, treating rivals as enemies can be useful—and that the pursuit of politics as warfare can mobilize people who fear they have much to lose. War has its price, though. For now, the American political order and its institutions remain intact. But the mounting assault on the norms that sustain them should strike fear in anyone who hopes to see the United States secure a democratic future.
This article was adapted from How Democracies Die, by Steven Levitsky and Daniel Ziblatt, to be published by Crown, a division of Penguin Random House LLC on January 16, 2018.
The Weinstein effect has hit Washington hard—and is probably just beginning—but it has had a disparate effect on the two parties. Until recently, neither the Democrats nor the Republicans had the high moral ground on sexual harassment. Both parties had prominent sexual harassers who were defended by partisans and shielded by a power structure that discouraged victims from voicing their complaints, let alone seeking justice. The Democrats had Ted Kennedy and Bill Clinton; the Republicans had Strom Thurmond and Bob Packwood. It was easy enough for the centrist press to say “both sides do it,” and for party hacks to respond with whataboutism.
But now, thanks largely to an internal revolt by Democratic women, the equivalence between the two parties is giving way to a stark contrast. The Republicans, albeit with some foot-dragging on the part of the establishment, are well on their way to becoming the party of President Donald Trump and Roy Moore, the Senate candidate accused of molesting underage girls. By contrast, the veteran Democratic Congressman John Conyers announced his retirement on Monday, in the wake of revelations that he had settled a case with a former staffer who accused him of trying to coerce her into sex, among other instances of sexual misconduct. Conyers ostensibly retired for health reasons, but the true reason was that party leaders like House Minority Leader Nancy Pelosi called on him to step down.
On Wednesday, a new story broke about Minnesota Senator Al Franken groping a woman, bringing the number of accusations against him to eight. In response, six female Democratic senators called for his resignation. By the evening, a majority of Democrats in the Senate—32 of them—had done so.
Franken is scheduled to make an announcement on Thursday. If he resigns, as he’s widely expected to do, then it’ll be impossible to deny that the two parties have increasingly distinct identities on sexual harassment issues. On Tuesday, former Arkansas Governor Mike Huckabee had asked Fox News, “But you know it’s down to the fact that as long as Al Franken is in the Senate, and Conyers is staying in office, why not have Roy Moore?” With Conyers gone and Franken on the precipice, the question becomes: If Democrats are willing to stand up to their accused sexual harassers, why do the Republicans tolerate Moore?
Republicans might deny that they’re the party of Moore. But Trump, the standard-bearer of the GOP, has endorsed Moore; the Republican National Committee has decided, after initial trepidation, to fund his campaign; and Senate Majority Leader Mitch McConnell has made it clear he’ll work with Moore if he wins. Talk of expelling Moore from the Senate has died down, as there do not appear to be enough Republicans who support the idea.
Arizona Senator Jeff Flake is one of the exceptions, one of the few leading Republicans who is openly opposed to a Senator Moore—so much so that he donated $100 to Moore’s Democratic opponent, Doug Jones:
This break with party loyalty was too much for Nebraska Senator Ben Sasse, who positioned himself as opposing both Moore and Jones:
The Democratic Party’s turn against Conyers and Franken was fundamentally about values. As Dara Lind wrote at Vox, for years “the Democratic Party has positioned itself as the defender of gender equality and women’s rights against Republican attacks.” The Democrats, seriously tested by offenders within their ranks, are now trying to live up to those standards.
The Republican Party is also making an affirmation of values by sticking with Moore and Trump: It is becoming the defender of toxic masculinity. The gender gap in presidential politics, whereby women are markedly more likely than men to vote for Democrats, is a relatively new phenomenon in American politics. It dates back to the 1980 election and Ronald Reagan’s embrace of the religious right. Moore is emblematic of all the dangers of that fusion of the GOP with pious fanaticism. The allegations against Moore aside, his views on gender are deeply retrograde. He co-authored a textbook arguing that women shouldn’t run for public office. In both Trump and Moore, the GOP has taken up the mantle of the worst form of patriarchy: an assertion of male dominance without even the protective charade of chivalry.
Some Republican pundits are treating the Democrats’ newfound hardline position against sexual harassment as a political ploy:
It’s true that sacrificing Franken is easier because his replacement will be named by a Democratic governor. Still, the reason the two parties are diverging on sexual harassment has more to do with their different beliefs about gender rights than with simple political expediency. After all, both Conyers and Franken are much loved by the party’s base. Punishing them carries a real cost: demoralizing faithful Democrats.
It’s also true that Democrats have to be more attentive to women voters than Republicans do. Hillary Clinton won 54 percent of the female vote and only 41 percent of the male vote. A third of Democrats in Congress are women, versus only 9 percent of Republicans. In both ideology and makeup, the Democrats are increasingly the party of female equality, and the Republicans the party of gender reactionaries.
It’s by no means clear that becoming the party of gender equality will always help Democrats, although at least one erstwhile conservative pundit sees an advantage:
Some conservatives are even trying to capitalize on the Democrats’ treatment of Franken.
A hardline stance against sexual harassment might help in 2018, but it could produce a backlash in subsequent years. It’s impossible to know for sure. But political parties shouldn’t project certain values, and hold themselves accountable to them, simply because those values are electorally popular. They should stand up for what’s right on moral grounds alone, and when it comes to sexual harassment, only one party is succeeding.
“My career, at the time, was in his hands,” Allison Benedikt wrote at Slate this week, about the beginning of her relationship with John Cook, her husband of 14 years. They were colleagues at a magazine when they first kissed, and he was her senior. That kiss took place “on the steps of the West 4th subway station,” Benedikt writes, and Cook did it “without first getting [her] consent.” The piece is an intervention into the conversation on office sexual harassment, with Benedikt fearing “the consequences of overcorrection” on this issue. She does not think that “the initial touch, the scooting closer in the booth, the drunken sloppy first kiss, the occasional bad call or failed pass” are necessarily harassment, and has the happy marriage to prove it. Her piece was titled “The Upside of Office Flirtation? I’m Living It.”
Benedikt’s essay was widely shared on social media, praised for its “nuanced” approach to the messy nature of human relationships. Only a day later, however, we were reminded that there is a stark line between office flirtation and abuse. On Wednesday Lorin Stein, who himself is married to a former employee, announced that he is resigning from the editorship of The Paris Review amid an investigation into his behavior towards women in his orbit. Stein’s predation has long been a whisper-network item in literary New York. In a letter of resignation to the board of The Paris Review, Stein apologized for the way he has “blurred the personal and the professional in ways that were ... disrespectful of my colleagues and our contributors.” He said that he has come to realize that his behavior was “hurtful, degrading, and infuriating.”
Benedikt has my sympathy. She is in the tricky position of figuring out how the long-past actions of a man she loves fit into the new political landscape. If she is absolutely sure that she is a feminist, and if she is absolutely sure that she is against the harming of vulnerable people, then she is left with difficult questions: If she was merrily compliant with behaviors that are not acceptable in the workplace today, does that make her complicit with the culture of harassment? How can she defend her husband—and by extension herself—while maintaining that they were right then as well as now?
Ultimately Benedikt suggests that a man should not be condemned for the things that her husband did. But Cook did do something wrong. You shouldn’t kiss a junior colleague without asking. You probably shouldn’t kiss anybody without asking, as a rule of thumb to remember when you’re drunk. Consent is such an easy premise, and Benedikt’s reluctance to acknowledge it seems generational. Fourteen years ago affirmative consent was not such a widespread idea, and perhaps the simple words “Can I kiss you?” didn’t come so easily to a man’s lips. But the world has changed, and affirmative consent is now the standard. All college kids know this. Just ask!
It is not unreasonable to demand that men in workplaces act as if the year were 2017 and not 2003. At the same time, nobody is retrospectively prosecuting a man for acting as if it were 2003 in 2003. Nobody is hauling John Cook into the sex-crimes dock or putting Benedikt on trial for crimes against feminism. Nobody is suggesting that she thinks Stein’s behavior is okay, or that the beginning of a loving marriage is the same thing as sexual harassment. But in writing her essay, in attempting to draw some universal principles from her specific experiences, Benedikt makes bad arguments with real-world consequences—of the kind that have kept the long-swirling rumors from Stein’s door until now.
I went to university late. I was 20 years old, and jaded from a bad relationship and a bad year at art school. Soon after starting my undergraduate degree at Oxford, I also started a relationship with a man in his thirties whose job it was to teach me. He did not coerce me; we pursued each other. I was very sad at the time and I could tell that he was too. He had moved there from another country and was isolated in the old boys’ club of Oxford. We were lonely and troubled people, and we made each other very happy. Our relationship continued for three years, until I moved to New York to work on my Ph.D. We went to weddings together. I ran up wooden staircases in buildings constructed hundreds of years ago to reach him. I slunk through shadows and took elusive cobbled paths through town to find him.
There was a lot of opportunity for coercion, but that didn’t happen: Once we started sleeping together, I made sure that my boyfriend never graded another paper by me again. I wanted to have my cake and eat it too, to sleep with a professor and keep my intellectual principles intact. I kept the relationship secret from almost all my friends. The whole thing was extremely fun, we traveled together, I loved him a lot. We didn’t get married or have kids, but I don’t regret it at all.
And I still think he did something wrong.
Professors should not have sex with their undergraduate students, even those who are older and more hardheaded and determined than the others. Academics abuse those junior to them all the time, and rely on a combination of tenure and shame to keep them out of trouble. This has also happened to me. I know that those two experiences—of a relationship and of an assault—are totally different. But they were both facilitated by the same permissive culture at universities. The first experience was good, the second was mindbendingly awful. I would have forgone the first to avoid the second.
The flaw in Benedikt’s argument is that it is so narrowly focused. It’s as if she thinks that the #MeToo campaign wants to take her marriage away. If Cook hadn’t kissed her on the steps of West 4th Street station in the light of the Duane Reade, she implies, she wouldn’t be married with those beautiful children. And then what would her life have been like? This is who I am, she seems to say.
When I say that professors shouldn’t sleep with their students, but that I don’t regret the time that my professor and I slept together, I am not contradicting myself. None of us can go back in time to change the past, nor do I have sufficient insight to know what life would have been like if I had never had that relationship. But I do know what I believe is right, right now. Justifying my own past is less important than protecting the vulnerable.
In muddling her experiences with her beliefs, Benedikt makes several missteps. The first is her undermining of consent as a crucial principle, and the endorsement of nonverbal seduction cues over verbal ones. “It is completely within the norm of human exploratory romantic behavior for people to take steps—sometimes physical steps—to see if the other person reciprocates their feelings,” she writes. It may have been the norm at one point, but no longer, at least when one person has professional or hierarchical power over the other. Use words. In California it is the law that state-funded colleges use the affirmative consent standard in investigating harassment cases. The law!
The second is subtler: the strong implication that the happy ending of heterosexual marriage and procreation excuses transgression. We see this idea all the time. If a horrible and destructive affair destroys a first marriage, but the second marriage produces children and a longstanding relationship, the transgressors are forgiven because it was fairytale fated. This is a toxic concept that abets dishonesty and asserts the happiness of people who marry and reproduce over other people’s.
The third is rhetorical. Benedikt extracts universal principles from her story, but there are so many contingent factors influencing her story to which the reader is not privy. There are nonverbal and verbal cues that are not in her piece that undoubtedly clarified the pair’s attraction to one another. There is the element of atmosphere, the intangible flavor that defined whether she, the inferior in this power dynamic, was or was not afraid. There is the personality of John Cook himself, who was interested in her as a partner and not as a victim. There was the enthusiasm of her nonverbal consent, which changes everything. In fact, “enthusiastic consent” is used as a standard in some colleges.
These contingent factors made Benedikt’s individual experiences okay. But because those factors are so various and so unpredictable, they cannot possibly be legislated for in harassment rules. When women say “do not kiss a junior colleague without consent,” they do so to protect the people who could be harmed by that action and are not otherwise empowered to protest. After all, we do not tell drivers that it is alright to drive drunk when there is nobody else on the road. We tell drivers never to drive drunk.
Benedikt is concerned that men may be unfairly targeted in the new anti-harassment culture. But in arguing for her husband’s innocence—which nobody was really concerned with, since this is a political discussion and not a personal accusation—Benedikt undermines the general position of harassment victims. She effectively backs up the platonic version of the man who says, “I didn’t know that what I was doing was wrong,” in order to defend the person that she loves—a position that was echoed in Stein’s statement. That is not how we put together responsible political arguments.
Every moment is only a single version of a multiverse of possibilities. If I hadn’t missed my train yesterday morning, I wouldn’t have had time to buy coffee. What would I have been thinking then, if I hadn’t gotten riled up on caffeine on the L train and formulated the outlines of this piece? It’s a rhetorical question, because we cannot know. I cannot know what other loves or joys I would have experienced if I hadn’t dated that professor. Benedikt cannot know whether she would or wouldn’t have married John Cook if he’d done things differently. Maybe she would have married someone else. Maybe she wouldn’t have married. Maybe she’d have been hit by a bus. The lives we live—the children we have, the people we marry—are not predetermined or specially right because they happen to have happened.
A few weeks ago I wrote here that the #MeToo campaign is not a positive assertion of feminist solidarity, but rather a shared experience of what has been done to us by others. When we come together to recall those times when we were harassed, we are raising our consciousnesses but we are not actually advocating for anything. The really important stuff happens after we’ve shared our experiences, and start deciding what to do next. I understand what Allison Benedikt is feeling, but that does not justify her solipsistic ambivalence about the anti-harassment campaign. Lorin Stein’s fall is the latest evidence that a new world is available, suddenly. We all experience the world as atomized beings, defined by our own pasts. But this moment is about our future.
On Wednesday morning, R.L. Miller woke up and went to Starbucks, where she and her neighbors stared at a bright orange sky and wondered whether it was safe to breathe. Usually, the Southern California native spends her days advocating for climate action as the president of Climate Hawks Vote, a grassroots super PAC. But on Wednesday, as catastrophic wildfires roared about 20 miles away from her home in Ventura County, she was a potential victim of climate change, too. “I’m scared,” she said, adding that one of the blazes north of her was moving south. “This is real.”
There have been many victims of the ongoing wildfires in Southern California, the largest of which is the Thomas Fire, a 101-square-mile monster blaze north of Los Angeles. The L.A. Times reports that the fire jumped the 101 freeway and was stopped only by the Pacific Ocean, along the way burning 50,500 acres, destroying 150 structures, and forcing 27,000 people to evacuate. Californians are used to wildfires, but Miller says these are unseasonable. “We usually get crazy wildfires in October, and then the first rains come in November, and ground stays wet and more rains come, and there’s no wildfire threat,” she said. “It’s early December .... This is happening because there is no more winter rain. There’s not enough winter rain, ever.”
Climate science backs up Miller’s observation. The state’s wildfire seasons are lasting longer and burning stronger due to human-caused climate change, as rising temperatures make vegetation drier and cause states like California to whip between very dry and very wet seasons. The current fires are so bad because of a mixture of dry foliage and low humidity, but also because of hot, dry winds blowing up to 70 miles per hour. These seasonal high winds, known as the Santa Ana winds, are not unusual for this time of year, climate scientist Daniel Swain told The Verge. But some scientists believe climate change “may be making these strong winds drier,” according to The New York Times.
These observations are nothing new to most Californians, who have known for a while that many of global climate change’s worst impacts affect their state. The news keeps getting worse: Just this week, a new study in the journal Nature Communications found that a persistent atmospheric high-pressure ridge in the Pacific Ocean could occur more often with climate change, causing more frequent and more dangerous droughts in the state. Jennifer Francis, a research professor at Rutgers University’s Institute of Marine and Coastal Sciences, told me this has implications for the Santa Ana winds as well. “The [high-pressure ridge] not only causes hot, dry weather in CA, but also tends to create offshore winds—known as Santa Ana winds—that often fan wildfires,” she said.
These fearsome trends are precisely why California is the most aggressive state on climate policy in the country. Governor Jerry Brown led a coalition of U.S. states that went to France this year to tell the world that they would meet the terms of the Paris agreement to fight climate change, despite President Donald Trump’s pledge to pull the country out. The state has the toughest fuel-economy standards in the nation, and some of the toughest emissions regulations on the fossil fuel industry. The state government even has a Climate Action Team that works to coordinate various emissions reductions programs across the state.
California is uniquely aggressive in fighting climate change, but it’s not uniquely a victim. Eventually every part of the United States, not just the coastal regions, will suffer from severe climate impacts. Climate scientists have documented how global warming will hit the rest of America, whether it be through more extreme precipitation in the northeast or crop failure in the heartland. But reality has shown it to us, too. Hurricane Harvey brought Houston, Texas, its worst rainfall and flooding in recorded history. The risks of sea-level rise in Florida were made more apparent by Hurricane Irma, which flooded city streets and destroyed seawalls. In Oklahoma, the temperature reached 100 degrees in the dead of winter.
This year’s mind-boggling extreme weather has shown us that climate change will leave few Americans untouched. And yet, so many states refuse to do much about it. That angers climate scientist Michael Mann, director of the Earth System Science Center at Pennsylvania State University. “The other states—Florida, Texas, Oklahoma—are under the control of climate change-denying politicians who continue to bury their head in the sand about climate change as they do the bidding of the fossil fuel interests who fund them, with the people they are supposed to be representing paying the cost in the form of devastating climate change-aggravated damage,” he said.
But for some, even California’s actions haven’t been enough. Miller, of Climate Hawks Vote, is one of climate crusader Jerry Brown’s strongest critics, urging him to stop the controversial practice of fracking in the state and to end his friendly relationship with the state’s oil industry. Now, the wildfire threat to her friends and family has upped the ante. “I am more determined than ever to fight and to speak out,” she said. “Never giving up, never backing down.”
President Donald Trump ran on a campaign of America First, but so far, he has governed on Trump First—particularly when it comes to foreign policy. On no issue is that more apparent than his decision to formally recognize Jerusalem as Israel’s capital and set in motion plans to eventually move the U.S. embassy there from Tel Aviv.
“Today we finally acknowledge the obvious—that Jerusalem is Israel’s capital,” Trump declared Wednesday from the White House’s Diplomatic Reception Room, noting that the city is the seat of Israel’s government—home to its legislature, Supreme Court, and prime minister and presidential residences. But here’s another reality worth acknowledging: While other presidents have wanted to recognize Jerusalem as Israel’s capital, they knew that it needed to be done in the context of a final peace agreement with Palestinian consent; otherwise, it could incite violence, even, quite possibly, war. So why does Trump feel compelled to do this now?
After a difficult first year in office, with few accomplishments and more than a few scandals, Trump wanted to please evangelical supporters, who comprise much of his base, on an issue they care deeply about. “While previous presidents have made this a major campaign promise, they failed to deliver,” he said in his speech. “Today, I am delivering.” It was no accident that Vice President Pence, a hero to that constituency, stood by his side during his announcement, in a room adorned with Christmas decorations.
“Is he doing this because of domestic politics? I think there you’ve got a real possibility. Obviously, this is a president who has catered mostly to his base,” said Shibley Telhami, a Palestinian American scholar and professor at the University of Maryland. “This is a president who has shown, over and over, that he’s more interested in what’s good for Donald Trump.”
But with this decision, Trump is not exactly exemplifying the art of the diplomatic deal.
“What we know is that he’s not doing it because it would be good for America’s interests in the Middle East,” Telhami added. “How do we know this? Because he’s doing this well before he announces the parameters of his plan that’s supposed to be the deal of the century among Israelis and Palestinians. It’s bound to be highly controversial, hard to sell, made impossible to sell under any circumstances and this obviously jeopardizes that possibility.”
Ambassador Dennis Ross, a former Middle East adviser to President Barack Obama, characterized the White House’s argument for this recognition as “removing an ambiguity and creating a kind of honesty” about the de facto status of Jerusalem. “But the fact that all of our allies and all of our Arab partners have called on him not to do this, it’s hard to see how it necessarily advances our interests,” he said. “What it does is it reflects something he wanted to do. That’s clear.”
It’s a recurring temptation for American presidents to think they will be the ones to bring peace to the Middle East. Presidents Bill Clinton, George W. Bush, and Obama believed that the Israeli–Palestinian conflict was the root of all Middle East instability. They believed that if you solve this problem, you’ll solve a lot of other problems. Over the course of the last eight years, however, a number of events transpired that debunked that theory: The Arab uprisings of 2011, the civil war in Syria, the rise of the Islamic State. These had nothing to do with the Israeli-Palestinian issue. Instead, they revealed that Arabs in the Middle East were more enraged with their own abysmal leadership and consumed by their own religious and sectarian disputes. Israel–Palestine just wasn’t as important to them.
But then Trump stepped into the Oval Office and called an Israeli–Palestinian peace accord one of his “highest priorities.” He tasked his son-in-law, Jared Kushner, with trying to renew peace negotiations; invited both Israeli Prime Minister Benjamin Netanyahu and Palestinian Authority President Mahmoud Abbas to the White House within his first months in office; and travelled to Israel and the West Bank after just four months as president.
For all the time, energy, and capital he was devoting to resolving this issue, it made sense when, in May, he signed a waiver delaying for six months the U.S. Embassy in Israel’s relocation to Jerusalem—a decision forced on him by a 1995 law requiring the president to transfer the U.S. mission to the holy city, but granting him the prerogative to postpone it for six months at a time on national security grounds. Every president has repeatedly exercised that right for the last 22 years. So, too, did Trump. There was no reason, after all, to infuriate the Palestinians and roil the rest of the Middle East with an inflammatory move that was sure to instigate controversy over a sensitive final-status issue—one of the key matters that must be resolved to end the conflict—just as he was embarking on an uphill quest to succeed where his predecessors failed and rescue the moribund peace process.
Besides the priority most presidents place on promoting and maintaining stability in the world’s most turbulent region, recognizing Jerusalem as Israel’s capital right now, and under these conditions, makes little sense coming from the author of The Art of the Deal. “I don’t believe in giving away things for free,” Aaron David Miller, a veteran Middle East peace negotiator for multiple administrations, Democrat and Republican alike, told me last week. If Trump has something that Israel ostensibly really wants, then it is something he could use to extract a concession from the Israelis down the road as part of peace negotiations. Why just give it away now, for nothing in return? “This isn’t a transaction as much as it is an effort to make a point,” Miller said.
Trump will try to alleviate the damage caused by this decision, which will upend decades of U.S. foreign policy and undoubtedly create the perception that the United States favors Israel in the conflict. “We are not taking a position on any final status issues, including the final boundaries or the resolution of contested borders. Those questions are up to the parties involved,” Trump said Wednesday, adding that he would support a two-state solution, if agreed to by both sides.
That is nothing new. “I’m looking at two states and one state, and I like the one that both sides like,” he said in February. The president also did not couple his announcement with assurances that a U.S. Embassy in Jerusalem will be in pre-1967 West Jerusalem—the part of the city that was Israeli before the Six-Day War, and which all viable peace plans have proposed that Israel retain.
But these moves likely won’t be enough to calm rising tensions. “Moving the U.S. Embassy is a dangerous step that provokes the feelings of Muslims around the world,” King Salman of Saudi Arabia told Trump in a phone call on Tuesday. King Abdullah II of Jordan similarly warned him that doing so would have “dangerous repercussions.” Shortly after Trump’s calls with Arab leaders, Palestinian protesters took to Bethlehem’s Manger Square and began burning pictures of the American president. One sign read: “Trump: Keep your populism away from Jerusalem.”
In the Tuesday briefing with reporters, a senior administration official said, “The president will reiterate how committed he is to peace. While we understand how some parties might react, we are still working on our plan which is not yet ready. We have time to get it right and see how people feel after this news is processed over the next period of time.” But that’s precisely why it was a mistake for Trump to make this decision before having a peace plan ready: It robs the process of a period of calm. By the time Trump’s proposal for an accord is drafted—Kushner said this weekend it would come eventually, but gave no timetable—it may prove impossible to get both Israelis and Palestinians to the negotiating table.
But perhaps that’s the point, despite Trump’s assurances Wednesday that this move does not diminish his commitment to achieving Middle East peace. “The president may have really already written off the deal,” said Telhami, the UMD professor, “and is looking for a way out.”
The fire hall in Nickerson, Nebraska, population 350, was packed. The metal chairs set up for the planning-board meeting were all taken. There was no room left in the aisles. Folks had squeezed into the back of the room, and the crowd stretched out the front doors and into the parking lot, where the latecomers were left pacing in the gathering dark, demanding to be let inside. Most Monday nights, the village board is lucky if one or two people show up to voice an opinion on municipal matters. On April 4, 2016, there were hundreds—so many that the board moved the meeting to a larger room and opened the windows to let the people outside hear.
The zoning committee was planning to act on a proposal that had caught the attention of everyone in the area. An unnamed company wanted to build a massive chicken-processing plant, designated in filings only as “Project Rawhide,” on a tract of agricultural land about halfway to Fremont, a larger town a few miles to the south. The plant, according to a report submitted by the company, would create 1,100 new meatpacking jobs, support 125 area farmers by buying 1.6 million chickens per week, and bring in an estimated 2,000 additional jobs indirectly linked to the project. As people poured into the fire hall, someone from the Greater Fremont Development Council, which had courted the project, handed out fact sheets, claiming that it would bring in a quarter of a billion dollars in revenue.
Randy Ruppert didn’t buy the math. He farmed 220 acres not far from the planned site just outside Fremont, where he had retired after 48 years of working for railroad companies as a manager of major construction projects. He was trained to think through problems systematically, and he envisioned a host of undiscussed downsides: truck traffic, odor, inadequate waste disposal, public health risks. One of his last assignments, at RailWorks Track Systems, had been to help the meatpacking giant Cargill head off the swamping of its corn-milling plant in Blair, Nebraska, about 20 miles east of Nickerson, during the Missouri River flood of 2011. The city of Blair had raced to stack a mile of sandbags around its municipal wastewater-treatment facility, and Cargill ended up building an eight-foot-high, three-mile-long emergency levee at a cost of more than $20 million. This new poultry plant was to be sited squarely in the flood plain of the Lower Platte River. What would happen when heavy rains came again?
The Lower Platte’s watershed was already classified as the sixth most polluted in the United States, thanks to another Cargill plant, in Schuyler, Nebraska, 30 miles upstream. Ruppert had become something of an environmentalist in his retirement, trying to persuade area corn and soybean growers to switch to no-till farming and to plant buffer strips along their streams to keep their fertilizers from adding to the toxic runoff. He even restored 65 acres of his own pastureland to high-diversity prairie. Millions of chickens would not only undo that work but also leave the river water unsuitable—even untreatable—for drinking. Discreetly, he began talking to neighbors. Ruppert is an unassuming man, gregarious but soft-spoken, a blue-jeans-and-ball-cap guy who earns respect with his precise attention to detail. (He likes to relax by spending unbroken hours engraving decorative designs into rifle parts under a microscope.) After he’d spent a few days laying out his concerns, hand-painted signs started appearing along the highway: NO PROJECT RAWHIDE, CHICKENS STINK, COMING SOON: BIRD FLU.
Ruppert also decided to contact Jane Kleeb, the founder of a nonprofit advocacy group called Bold Nebraska, which had become the leading force in the state against the proposed Keystone XL pipeline. For nearly eight years, Kleeb had kept that project on hold by rallying farmers and ranchers around concerns over contamination of surface and groundwater in the event of a spill—until finally, in November 2015, President Barack Obama announced that he was rejecting the plan. (President Trump has reversed that decision, but Bold Nebraska’s efforts continue to hold up construction in Nebraska.) Ruppert figured Kleeb could help him build opposition to what he considered another threat to the state’s waterways by contacting the people on Bold Nebraska’s mailing list and its followers on social media. Ruppert spoke with Kleeb on the phone a couple of times, and she agreed to come to the board meeting in Nickerson and even go to his home beforehand to talk with people from the community about how to fight the plant.
Kleeb told me that she wanted to get a feel for the commitment of Ruppert’s group “to see if there was a winnable campaign,” but she also wanted a clear sense of everyone’s motives. Fremont is an old meatpacking town that has been transformed in recent decades by the industry’s use of undocumented labor. In that time, the community has also come to be synonymous with America’s rising nativist sentiment, due to a protracted and bitter legal battle it has waged against undocumented Hispanic workers at the Hormel Foods plant in town. When Bold Nebraska alerted its membership online to local concerns about Project Rawhide, several of them raised objections to being seen as partnering with xenophobes and racists.
Kleeb decided to go to Fremont anyway. At Ruppert’s home, she warned everyone about their perception problem and said they needed to limit their protests to concerns about water quality. Afterward, on the drive to the fire hall, her minivan fell in with the caravan of pickups and SUVs. “I was following this guy who had a Tea Party bumper sticker and a pro-gun bumper sticker,” she said. The driver was Doug Wittmann, the head of Win It Back, the local Tea Party organization. (He also happened to be Randy Ruppert’s brother-in-law.) After they parked along Nickerson’s main drag, Wittmann approached Kleeb on the sidewalk and asked her what she thought of his stickers. If Kleeb’s years of organizing fiercely conservative ranchers in the Nebraska Sandhills had taught her anything, it was to build alliances around issues rather than identity. “Here’s the thing,” she told Wittmann. “We’re definitely not going to agree on gun policy, but we will agree on water and property rights.”
The crowd inside the fire hall was unexpectedly tense. The zoning committee had announced that it would take no comments from the community; the members just wanted to rule on the procedural question of reclassifying the property to commercial use. If anyone wanted to take up larger issues, they should talk to the village board, which would meet immediately afterward. With that, the committee voted unanimously to approve the rezoning. A ripple of anger coursed through the room. Some people got up to leave, thinking that the project was already a done deal.
When everyone was herded back into the meeting room after the break, the village board asked if anyone wanted to speak. John Wiegert, a local schoolteacher who had driven up to the meeting from Fremont with his sister, said that he’d intended to sit quietly. But after an awkward pause, and with no one rising to take the floor, he stood up.
“Hell, I’ll talk,” Wiegert said.
The development council had been handing out fliers touting the new jobs this project was going to create, he said, but how was a town this size going to fill 3,000 new positions? “I’m worried about the type of workers that this will attract,” he said. He knew that in recent years the meatpacking industry had turned away from undocumented Hispanic labor. New workers now typically had legal status, but that didn’t mean they were U.S. citizens. In the poultry industry, many of the workers could be political refugees from Somalia. “Being a Christian, I don’t want Somalis in here,” he said. “They’re of Muslim descent.”
Ruppert says that he shook his head, quietly cursing Wiegert’s every word. This was exactly what Kleeb and Bold Nebraska had told them not to do. But Wiegert pressed on. “You know where they’re going to live?” he asked. “In our neighborhoods.” He pointed, one by one, to people around the room. “They’re going to live next to you and you and you—and me.”
At the end of the night, the board unanimously rejected the proposed plant.
John Wiegert’s crusade against immigrants was well-known to the people at the meeting. In late 2008, he had led a petition drive to enact a city ordinance that would bar “illegals” from renting apartments, buying homes, or holding jobs in Fremont. To make sure the wording of the ordinance would stand up to legal scrutiny if it passed, Wiegert spoke with Kris Kobach. As a representative of the right-wing Federation for American Immigration Reform (FAIR), Kobach had authored similar ordinances in other small towns in Pennsylvania, Kansas, and Texas. Kobach, Wiegert said, was sure he could write a measure for the official ballot issue that would hold up in court. But he wanted to know that the petitioners were ready for the long legal battle that would follow. Wiegert told Kobach not to worry. “Once I get started on something,” he said, “I go through with it.”
The measure not only passed with 57 percent of the vote in Fremont but survived years of legal challenges reaching all the way to the Nebraska Supreme Court. When I discussed the case with Kobach in 2012, as part of a book I wrote about Hormel and modern pork production, he described his trip to Fremont as “a refreshing vignette—no, let me use the English word—a refreshing little picture of citizens who were taking this issue on, who were trying to get expert help, and who were just delighted that I was willing to help them.”
In 2013, just as the ordinance was about to be formally implemented, the Department of Housing and Urban Development (HUD) stepped in, claiming that the ordinance, which required would-be renters to apply for an occupancy license that included a statement of immigration status, violated the Fair Housing Act. Fremont would have to either repeal the housing portion of the ordinance or face the prospect of losing community development block grants that the city received each year from the federal government. The city might even have to pay back millions in grants it had already received. Fremont Mayor Scott Getzschman urged the city council to repeal the housing clause, explaining that it was largely symbolic anyway: Most undocumented workers employed at area meatpacking plants were already segregated into a large trailer park in Inglewood, just south of the Union Pacific tracks but still technically outside of town, or down U.S. Route 30 in Schuyler. The ordinance did nothing but ensure that meatpacking workers continued to pay rent in neighboring communities.
With support from conservative politicians around the state, Wiegert spearheaded an effort to stop the city from unilaterally repealing the housing clause. “We have a crooked government,” he told the city council at a meeting that I attended in late 2013. “You guys should be ashamed of yourselves.” Under mounting pressure, the council finally caved and decided not to take up the issue themselves but instead to put the ordinance to another vote. In February 2014, Fremont held a repeal election—and the measure was retained, this time with 60 percent of the vote. The citizens of Fremont were clear: They would rather lose all federal funding and pay back what the city had already been given than comply with federal law governing housing and immigration status. “The people have spoken,” Wiegert told the Omaha World-Herald. “Hopefully, they’ll get the message at City Hall, finally. They need to listen to the people of Fremont.”
Kobach has since ridden the notoriety of his anti-immigrant stance to election as Kansas secretary of state, then a spot as an immigration policy adviser to Donald Trump’s presidential campaign, and now a place as co-chair of Trump’s election commission, leveling allegations of widespread election fraud by undocumented immigrants. The outcome for Fremont has been less satisfactory. The ordinance left the town in a financial crisis. Fearing the loss of federal funds, Fremont made cuts to the city budget. Park lawns went unmowed. There were no meter maids to write parking tickets. More significantly, the city attorney advised the city council to set aside funds for a potential legal battle over the ordinance. Based on the bills incurred over Kobach-authored ordinances in Texas and Pennsylvania, Fremont established a $2 million defense fund. Looking at the cash-strapped municipal budget, the mayor and city council decided that Fremont needed to attract additional businesses to broaden the tax base, but after all of the negative press attending the passage of the anti-immigration ordinance, only one company could be coaxed into moving to town.
The initial obfuscation of Project Rawhide was dropped soon after the Nickerson meeting, and the unnamed company turned out to be Costco, the world’s largest retailer of rotisserie chickens. The wholesale chain, third in sales only to Walmart and Amazon, was poised to make a major play in the middle states, and its bargain-basement rotisserie chickens—the chain’s signature loss-leader—would bring customers through the doors. All Costco needed was one of the country’s largest chicken-processing plants, located somewhere near the dead-center of the country. When Wiegert helped convince the Nickerson village board to reject the Project Rawhide proposal, Costco didn’t give up. It issued a new plan, moving its site only a few miles, to a location just south of Fremont, directly adjoining the Hormel plant.
After Costco announced the move to Fremont, Randy Ruppert decided it was time to get organized. He had already taken to calling his loose-knit group of anti-Costco protesters Nebraska Communities United, but there was little more to it than a website where he shared information. To defeat Costco, he needed a formal structure, elected leaders, a real plan. Ruppert again called Jane Kleeb and asked if she could help him turn his ragtag group of family, friends, and neighbors into a durable opposition. Kleeb not only agreed, she went one better: She contacted Dave Domina, an Omaha attorney with a reputation as something of a white knight for small farmers in Nebraska. Domina has represented the landowners suing TransCanada over Keystone XL, corn growers suing the seed giant Syngenta after China rejected their GMO crop, and farmers suing Monsanto claiming that its herbicide RoundUp was damaging their organic crops. Domina even ran for U.S. Senate against Fremont native Ben Sasse in 2014—though in heavily Republican Nebraska, he was defeated in a landslide.
Ruppert hoped that the group would heed Domina’s advice and focus on environmental concerns. “This is not about nationalism,” he said. “This is not about closed-mindedness.” Ruppert told me that the real concern was whether Fremont’s wastewater-treatment facility was equipped to handle the demands of such a large poultry processor. Costco had already acknowledged that wastewater from the plant would contain fat, blood, ingesta, and fecal matter, as well as processing chemicals such as ammonia hydroxide, chlorine, and peracetic acid. The company promised that all solids and fats would be removed within the plant; the remaining waste would go to a complex of covered anaerobic lagoons, which the city agreed to finance and place under the supervision of the Nebraska Department of Environmental Quality.
But Ruppert pointed out that the whole system would use about two million gallons of water per day, and the lagoons were estimated to occupy a stunning 40 acres. If the Lower Platte ever flooded, as the Missouri did just a few years ago, these lagoons could release a deluge of toxic chemicals. Even normal runoff from hundreds of new chicken barns would threaten to carry nitrates and phosphorus into the river from the “litter” of the 150 million birds that Costco hoped to slaughter each year. This volume of waste created a larger, and very specific, public health concern. The Lower Platte is one of the primary sources of drinking water for Omaha and Lincoln, cities with a combined population of nearly a million people. Water quality is already a particular concern in historically black North Omaha and mostly Hispanic South Omaha. Should the Lower Platte suffer further damage from the meatpacking industry, the people who stand to suffer first and most are the state’s two largest minority communities.
Consider, then, the bind in which people in Fremont like Ruppert find themselves. While he disliked the prospect of being called a racist for allying his movement with an avowed Islamophobe like Wiegert, he worried that simply shunning Wiegert and the many townspeople who agreed with him would split the opposition to Costco—and ultimately speed the construction of a plant that would profit from immigrant labor and pose a direct threat to the health of people of color. “If you want to talk about being a racist,” Ruppert said, “this corporate model of farming is pure racism.” So, instead, he decided to see if he could convince his neighbors to stick to environmental grounds and leave immigration out of it.
The meeting Ruppert called with Domina was held at the home of Debby Durham, a friend who lived on the north side of Fremont. Durham had risen at the most recent Fremont city council meeting to express her opposition to Costco over concerns about avian flu—but also the worry she shared with John Wiegert. “Our town is in jeopardy, not only with disease,” she said, “but illegals. Eleven hundred of them.” Durham had invited Wiegert to the meeting, which Ruppert decided, after some debate, was a good idea. He didn’t share Wiegert and Durham’s views on immigration, but he had to admit that they were passionate opponents of the plant. If he could convince them to focus on the public safety concerns, then he thought they deserved a place in the group.
Once everyone was seated in the living room of Durham’s sprawling farmhouse, Domina rose to speak. If the group wanted to stop the chicken plant, he said, they had to focus on water quality, unfair contracts, and the abuses of public domain that almost always attend such projects. “Don’t talk about immigration; don’t talk about the workers,” he said, according to several people who attended the meeting. (Domina did not respond to repeated requests to comment for this story.) Wiegert told me that he assumed Domina’s remarks were made directly to him, but he says he remained silent. “Wasn’t my meeting.” To break the tension, Doug Wittmann, Ruppert’s brother-in-law, jumped in. “Look,” he said, pointing at Kleeb, “we have wacky liberals. We have wacky conservatives,” gesturing toward Wiegert, “and we have some Libertarians. But we can all fight this social disgrace.”
Wittmann says he wasn’t just trying to make peace. He honestly believes that people’s reasons for opposing the chicken plant—for assuming any stance on issues of public policy—are more complicated than the current political climate will allow. He conceded that “a lot of my kind of people”—in the Tea Party—were motivated by antipathy toward Muslims coming into Fremont. But personally, he was concerned about something else. “The chickens,” he said. “I hate the CAFO idea.” Wittmann’s opposition to confined animal feeding operations arose from his hippie days—he had dropped out of college in the early 1970s and used his savings to buy a head shop in Lincoln—and also from his rediscovered Christian faith. (“A wise man considers the life of his beast,” he told me, quoting from Proverbs.) It would be easy to make assumptions about his motivations, he said, given his political affiliations, but it wasn’t so simple.
Still, when pressed on whether he thought Domina was right to insist on leaving immigration issues aside, Wittmann was ambivalent. “I don’t know why they thought that immigration was a losing idea,” he said. “Fremont voted 60 percent for having an illegal alien ordinance, even though we were being opposed and outspent, eleven to one, by the mayor, the council, the chamber of commerce, and all the businesspeople.” Logic dictated that opposition to the plant would only be strengthened by bringing ethnicity and immigration into the debate.
Wiegert, for his part, simply refused to be reined in. “Dave didn’t want us to talk about the workers,” he told me with a shrug. “I did anyway.”
Costco’s application to build the plant kept sailing through the Fremont city council’s approval process. The new site, on farmland just outside the city limits, was annexed into the town, then rezoned from agricultural to commercial, and then declared blighted and substandard, all maneuvers that allowed the town to offer Costco more than $13 million in tax incentives. When objections were raised over condemning productive farmland, the size of the allotted area was stretched to include some abandoned buildings close by. Through it all, people from Fremont gathered, in person and online, in attempts to unify over their opposition to the plant. But deep rifts were soon exposed.
Jerry Hart, one of John Wiegert’s co-petitioners on the anti-immigration ordinance, was a regular, and confrontational, presence on a Facebook group called Fremont City Council Watchdog. One local resident mocked him, writing, “yes, you should be so ‘proud’ of your illegal ordinance, that accomplishes nothing, and made Fremont a laughingstock.... And so many of those you think are illegal are not. They have as much a right to be here as you do. You, sir, are un-American.” Hart replied, “Illegal is not a race. Mexican is not a race. Muslim is not a race. You are stupid and you can’t fix stupid.”
When a Change.org petition was started online, those adding their names attached long signing statements. Between demands for environmental studies, complaints about “corporate greed,” and at least one call to “go vegan,” there were also overtly anti-immigrant comments. A local school-bus driver denounced “all these Hispanics or Japanese whatever kind of people that come in our country that don’t speak a word in English.” Mary Trehearn, a ninth-grade English teacher at Fremont High School, wrote, “The school system is sound. Property values are good. This will change everything. Taxes will skyrocket. Property values will plummet. Crime will increase. Illegal immigrants will pour in.... And who are we talking about bringing in? Muslims, Somalians, and Sudanese! Are we out of our minds????”
After similar statements at small-group community meetings hosted by the Greater Fremont Development Council, Walt Shafer from Lincoln Premium Poultry, a Costco subcontractor that would manage the plant, announced that no one from his company or Costco would attend a larger, citywide gathering scheduled to be held at a meeting hall south of the railroad tracks. “We’re not going to meet with a lynch mob,” Shafer said. After another meeting hosted by Nebraska Communities United in September, I was approached by a man named Gene Schultz, a member of Doug Wittmann’s Tea Party group, who told me that he had changed his position somewhat on Mexican workers at Hormel. “At least they’re not Muslims,” he said. “Muslims will cut your head off.” Later, I looked up his lengthy Change.org signing statement, which included a warning that if Costco were allowed to build its plant, the Muslim call to prayer would blare from loudspeakers in Fremont five times per day. “When hundreds of Somali or Syrian Muslims come to our town they will change our culture,” he wrote. “These are typically uneducated, sharia-loving Muslims that have no concept of our Constitution, laws, or modern way of life.”
Wiegert, meanwhile, was a regular presence at the city council meetings, loudly railing against Costco’s putative workforce. He was also a reliable source for inflammatory statements in the media. When reporters called, he always denied being a racist and instead talked about legal immigration and the rule of law, the impact that refugee children would have on local schools, and his belief that communities should be able to decide “who they do or do not want as neighbors.” Inevitably, though, he would veer off course, and insist that there is good reason to be afraid of Muslims, and Somalis in particular, given instances of terrorism around the world. He told one reporter he didn’t want a single Somali in his town: “Even if there’s one, there’s one too many.” He talked to the Omaha World-Herald, the state’s largest newspaper; went on American Public Media’s Marketplace; and even gave Katie Couric a guided tour of Fremont for a documentary series set to air next year on the National Geographic channel, driving her around the trailer park south of town where most of the “illegals” live.
Ruppert did everything he could to counter Wiegert. “Animosity needs to be set aside,” he told the local newspaper. “We need to proceed with the facts as we know them, and proceed in a way that does not divide the community.” And his environmental message did seem to have an impact—to an extent. In December, after repeated demonstrations that the land in question was in a flood plain—and barely a half-mile from the main channel of the Platte River—the city council approved tax-increment financing to assist with relocating the covered lagoons from the Costco site to land adjoining the wastewater-treatment facility, three miles away. But this short-term victory proved costly. With no council debate, the private farmland was earmarked for acquisition by the city via eminent domain—and with the improved sewage treatment, Costco then applied to expand the size of the plant onto additional land, increasing production to more than two million chickens per week. The new plan was unanimously approved by the city council. What’s more, after Donald Trump was elected president, the fear of HUD suing Fremont over its anti-immigration ordinance also eased; the city raided the $2 million legal defense fund and used the money to address infrastructure needs expressed by Costco.
Ruppert knew he needed help from large environmental groups to keep up. He couldn’t research the new site, look into the new sewage plan, and investigate the companies slated to complete the work; it was too much to do alone. He turned to national environmental organizations for assistance, but they were of little use. “I’m incredibly disappointed in the Big Greens,” Ruppert told me. “I don’t know what they do with their money. I truly don’t. We’ve had little to no help—a lot of ‘support’—but where are you on these issues? These are the issues that you’re supposed to be fighting for.” Without assistance from the outside, Ruppert and Nebraska Communities United reluctantly decided to shift their message away from opposing Costco on environmental grounds and toward fighting the city of Fremont on its use of eminent domain.
Doug Wittmann, for one, was delighted by the change. As a Tea Party true-believer, he strongly opposed the idea of government being able to take away private land for any reason. “The right to life, liberty, and the pursuit of happiness—or property—being among the rights that God gives us, is sacred,” he told me. Wittmann was able to rally renewed support from his group, and Dave Domina, who had extensive experience in eminent-domain law from working with landowners on the Keystone XL route, said he was willing to help with a legal challenge to this use of governmental power for private corporations. He agreed to draw up the language for a ballot measure to officially curb the eminent-domain powers of the city of Fremont.
The problem with compromises is that once you start making them, you may not be able to stop. To get Domina’s measure onto the ballot, the Costco opponents had to get enough people to sign a petition supporting it. And who among them had more experience in putting together walking lists and working with canvassers? Who had shown more tenacity in making calls and going door to door collecting signatures? It took some doing, but finally Ruppert agreed to Wittmann’s suggestion for the named lead petitioner: John Wiegert.
I visited Wiegert in his home one evening a few days before Halloween. The large flat-screen TV over the high-top table in the kitchen was tuned to Fox News. The weather had grown so suddenly cold that it had caught him by surprise. He hadn’t turned the heat on in his house yet. Still zipped into his jacket, Wiegert brewed a fresh pot of coffee and talked with little enthusiasm about going door to door with the eminent-domain petition. He had made some calls to his old political contacts and found a consultant willing to provide a list of phone numbers and addresses of people likely to add their signatures. That would allow him to call ahead and only go out to collect signatures from people who had already agreed to sign, rather than knocking on doors.
“Who would want to go out and walk tonight to get signatures?” he asked me.
I asked Wiegert if he had been given a script to follow, explaining the issues surrounding eminent domain. He said he didn’t need a script. He was legitimately concerned about a governmental power used to seize land and city dollars being put toward a facility built for the sole benefit of a private company. But he wasn’t about to deny his other motivations. “Right now, we’re fighting them on eminent domain,” he said. “I guess whatever it takes to get them out of Fremont, to stop it, I’m all for.”
I asked Jane Kleeb, who in June 2016 became chair of the Nebraska Democratic Party, if she thought that eminent domain was being used in Fremont as a fig leaf to place over another xenophobic campaign. She rejected that idea outright. She conceded that Wiegert had “warped views on immigrants” but bristled at the implication that working with him tainted her own efforts. “The only way that we’re going to win these big political fights is with unlikely alliances,” she said. “This was true on the pipeline, too. A lot of the farmers and ranchers have different political views on other issues, but we were all on the same page for stopping Keystone XL.” Kleeb also said she believed that these “unlikely” partnerships can unstick the political gridlock in a deep-red state like Nebraska—and perhaps even in America. “The more you work alongside each other, shoulder to shoulder, the more difficult it becomes to demonize each other,” she said. “That’s the biggest thing: You will start to change people’s minds—and your mind will start to change on certain things—even while we still disagree on some fundamental issues.”
It’s a noble sentiment—and one that I share, at least in theory. But I couldn’t ignore the darker forces that seemed to be driving at least part of Fremont’s opposition to the Costco plant. Earlier that year, I had reported on another heartland town roiled by an influx of Muslim workers at a meatpacking plant. According to the FBI, white supremacists had planned a large-scale bomb attack on the homes of a community of Somali refugees in Garden City, Kansas. If Fremont’s often vulgar opinions about Muslim outsiders, conflating them with terrorists, blossomed into violence, what responsibility would everyone involved bear for not loudly denouncing the xenophobic and Islamophobic rhetoric from the very beginning? Kleeb and Ruppert, and perhaps even Dave Domina, may hold out hopes of softening the hearts of their strange political bedfellows, but that night at Wiegert’s house, when I asked him if he had “evolved” by working with Nebraska Communities United, Bold Nebraska, and now the state Democratic Party, he dismissed the idea. He wasn’t interested in being converted to anyone else’s cause, and he didn’t expect to win converts to his.
“I’m not against Muslims,” Wiegert repeated. “It’s not like I hate them, but I will say: It’s a worry.” He glanced up at the TV. He had the sound off, but the Fox News talking heads still pantomimed their anger, while the scroll at the bottom of the screen demanded an investigation of Hillary Clinton’s Russian ties. “They come from rogue nations. And they don’t get how we live. So it can upset a city, upset a state. It scares me. It really does.”
Before Wiegert could get his petition campaign underway, the city executed yet another legal end run. To avoid an eminent-domain fight, they made a pre-emptive bid on the farmland neighboring the wastewater-treatment facility—reportedly offering the owners nearly three times what surrounding property typically sells for. Ruppert told me he had already talked to Domina and to Kleeb, and they were looking into guidelines governing how much a municipality can legally spend to obtain a property for a project. So the fight continues, but the more fundamental questions have yet to be reckoned with: Are environmentalists succeeding in winning the nativists over to their way of thinking? Or have they been co-opted into the xenophobic cause?
Wiegert, for his part, said he still couldn’t understand why anyone in Fremont would be more concerned about Omaha’s water than a Muslim moving in next door. “Heck,” he said, “you’ve got a president saying, ‘Let’s put a ban on these people until we can get something to make sure that these people coming in are coming for the right reason.’ ” He pointed at me, for emphasis, as if the cameras were watching. “The American dream.”
It’s been less than three years since the U.S. Supreme Court legalized same-sex marriage across the land, in a stirring, spotlight-grabbing decision handed down by Justice Anthony Kennedy. And yet, during Tuesday’s oral arguments for Masterpiece Cakeshop v. Colorado Civil Rights Commission, the case in which Colorado baker Jack Phillips refused to create a wedding cake for a same-sex couple, Kennedy’s line of questioning seemed to imply that it was the baker, not the couple, who had been discriminated against.
Speaking to the lawyer representing Colorado, Kennedy, who is expected to cast the crucial swing vote in the case, homed in on an offhand remark made by one of the state’s seven commissioners on civil rights at a July 2014 hearing:
Freedom of religion and religion has been used to justify all kinds of discrimination throughout history, whether it be slavery, whether it be the Holocaust, whether it be—I mean, we—we can list hundreds of situations where freedom of religion has been used to justify discrimination. And to me it is one of the most despicable pieces of rhetoric that people can use to—to use their religion to hurt others.
This quote is likely familiar to anyone supporting Jack Phillips in the case. Over the past several months, conservative Christian media outlets and advocacy groups have seized on it as evidence that the state was biased when it said he violated Colorado anti-discrimination laws.
In a September e-mail blast, Family Research Council executive vice president William “Jerry” Boykin urged the FRC’s supporters to stand with Phillips against the commissioner who “compared Jack standing up for his Christian faith to the actions of Nazis in the Holocaust!”
Earlier that month, the Christian Post ran a video segment showing Phillips walking through an American military cemetery and explaining that he could not possibly have discriminated against the couple in his bake shop, because his father was a soldier in World War II who landed at Normandy and later liberated a Nazi concentration camp.
It’s a common rhetorical twist, one that enshrines Americanness with an impenetrable purity by contrasting it with the country’s most evil and uncontroversial enemy. But in this case, we have to see this nationalist rhetoric through the lens of Christian conservatism. This conservatism is mostly white, and rigid on sexuality and gender norms. In a recent PRRI survey on American values, a majority of white evangelicals said they believed wedding-based businesses should be able to refuse same-sex couples. They were the only polled group with this majority view.
Cases like Masterpiece are a symptom of white conservative Christians feeling like they have become outsiders in what they believe is their country. Jack Phillips’s lawyers at the Alliance Defending Freedom have framed his and other similar cases as efforts to protect Americans who have been caught in the crosshairs of Kennedy’s Obergefell decision and LGBTQ rights advocates. They’re trying to protect themselves against what they perceive to be the new dominant cultural norm.
But that new norm hasn’t yet extended to the full protection of the rights of LGBTQ individuals. As Sarah Jones noted in a preview of Masterpiece for The New Republic, LGBTQ people are not a federally protected class, and if the Supreme Court rules against them it will undermine an already patchwork network of local anti-discrimination protections.
Furthermore, Masterpiece is not simply a case of a baker with deeply held religious views. It’s a case of a white Christian baker. It’s hard to imagine the same arguments gaining traction or sympathy with the ADF if Phillips were Muslim—at the very least, the ADF has not emphasized the rights of religious Muslims or recruited Muslims to challenge anti-discrimination ordinances. The group remained silent on the Trump administration’s travel ban except to tell the Ninth Circuit court that the president’s campaign-trail tweets and anti-Muslim rhetoric should not be included in its analysis.
In fact, a ruling in favor of the cake shop may later enable discrimination against members of minority faiths, as Columbia Law School professor Katherine Franke has pointed out. “Religious minorities ... depend on non-discrimination laws to make it possible for them to work,” she said Tuesday. “Religious liberty is at risk with the argument that this narrow sect of people is making.”
But this perceived victimhood on the part of conservative Christians may have found favor at the high court.
While it may not be the case that the justices’ religious views impact their interpretation of the case, their conservatism definitely does. It allows LGBTQ people to be seen not as a protected class, but rather as a new political group with a controversial agenda. During Justice Kennedy’s interrogation of potential religious animus, Justice Samuel Alito chimed in to say, “One thing that’s disturbing about the record here ... is what appears to be a practice of discriminatory treatment based on viewpoint.”
“It’s okay for a baker who supports same-sex marriage to refuse to create a cake with a message that is opposed to same-sex marriage,” Justice Alito went on to say. “But when the tables are turned and you have the baker who opposes same-sex marriage, that baker may be compelled to create a cake that expresses approval of same-sex marriage.”
Here, Alito is treating the approval of or opposition to same-sex marriage simply as a controversial political viewpoint and not as a policy question that might relegate certain citizens to second-class status. When he says “a cake that expresses the approval of same-sex marriage,” he means a cake for a gay couple.
Meanwhile, the commissioner’s suggestion that religion has been used as a basis for discrimination—a historical fact—was cast as an attack on Jack Phillips’s Christian identity. “This case comes down to who you view is the aggrieved party,” said Greg Lipper, a former senior litigation counsel for Americans United for Separation of Church and State. “If you take a step back, until 2003, it was legal for states to prohibit gay people from having sex. Until 2015, it was legal to prohibit gay people from getting married. LGBT people continue to be subject to higher rates of discrimination, harassment, suicide, and all of those things.”
Kennedy’s questions on Tuesday, however, made it sound as though he believes the Christian wedding vendors are the aggrieved ones. “Counselor, tolerance is essential in a free society,” said the justice. “And tolerance is most meaningful when it’s mutual. It seems to me that the state in its position here has been neither tolerant nor respectful of Mr. Phillips’s religious beliefs.”
Supreme Court justices can be hard to read, but this line of questioning indicates that Kennedy might be swayed by the baker’s case, according to Lipper, who said “the fact that he seems viscerally more concerned with the bakery than with the discriminated against couple is not a good sign.”
The Republican tax reform plan that was passed in the dead of night last weekend is not really a tax reform plan. Yes, it brings the corporate tax rate down to 20 percent, but it doesn’t do the one thing genuine tax reform is meant to do: simplify the tax code. Instead, if signed into law, the bill will make America’s already impossible-to-navigate tax code even more byzantine, likely requiring an expansion of the IRS to collect the necessary revenue. Its real innovation is political: The bill functions as a kind of enemies list, a grab bag of tax increases and other policy changes aimed squarely at Democratic constituencies.
Republicans have been open about this strategy. “It’s death to Democrats,” Trump campaign economic adviser Stephen Moore told Bloomberg. “They go after state and local taxes, which weakens public employee unions. They go after university endowments, and universities have become play pens of the left. And getting rid of the mandate is to eventually dismantle Obamacare.” Republicans have signaled that, having pushed a tax plan that would juice the deficit, they now plan to turn to entitlement reform to reduce that deficit. It’s an ambitious plan. This tax bill allows Republicans to continue their assault on Democratic states and constituencies and undo Democratic accomplishments, from New Deal programs to Obamacare.
Owing to the Senate’s arcane rules, a barrage of last-minute dealmaking, and the furious pace at which it was passed, this tax reform bill is a convoluted mess. But it’s also a revealing one, showing that the Republican Party is moving away from past priorities, like lowering the deficit, and toward an all-encompassing drive to explicitly punish their political opponents. In the tax reform bill, Republican constituencies, particularly the wealthy and corporations, are richly rewarded, while Democratic ones get the shaft. The bill, along with its messaging, is openly anti-Democratic in a way that is different from past Republican legislation. A handful of Democrats, for instance, voted for the Bush tax cuts.
The most notable change, from this perspective, is the removal of the state and local tax (SALT) deduction, which adversely affects wealthier taxpayers in states like New York, New Jersey, and California. To an extent, this is unsurprising, given that a New York Times analysis found that “none of the senators representing the top 10 states taking the SALT deduction are Republicans.” Moreover, the areas hardest hit by this change are urban centers like New York, San Francisco, and Washington, D.C.—all of which went to Hillary Clinton in the 2016 election by commanding margins. According to analysis from the Institute on Taxation and Economic Policy, New York, New Jersey, Maryland, and California would pay $17 billion more in taxes by 2027, while Texas and Florida, two large states that Trump won, would pay $31 billion less. “You can definitely see the ideological tilt here,” Carl Davis, the institute’s research director, told The Atlantic’s Ron Brownstein.
Under this provision, the wealthy are hit hardest—most lower-income taxpayers who take SALT deductions would benefit from a larger standard deduction. Nevertheless, because property taxes fund public education, the change could have a dramatic impact on lower-income people, too. “It would jeopardize the ability of state and local governments to fund public education,” NEA President Lily Eskelsen Garcia said in a statement expressing opposition to the House’s tax reform bill. “That will translate into cuts to public schools, lost jobs to educators, overcrowded classrooms that deprive students of one-on-one attention, and threaten public education.”
Fewer than 10 percent of all American children attend private schools, and a dwindling number of those are from middle- or working-class families. The GOP Senate bill, however, expands 529 savings programs—which were designed to encourage families to save for college—to apply to private elementary and high schools. While public school budgets will come under the ax, parents of private school students using 529 plans will pay tuition tax-free.
The House’s tax plan also targets college students, who voted for Clinton by large margins, by taxing tuition waivers as income—a change that could make graduate school unaffordable for many students. The plan also creates huge incentives for companies to embrace automation, which could have disastrous effects for the American manufacturing workers that Trump promised to help. Senate Republicans declined to include the bolstered Child Tax Credit proposed by Senators Mike Lee and Marco Rubio, which would have provided a modest cash benefit to low-income families. Meanwhile, the Senate bill doubled the estate tax threshold from $11 million to $22 million, which would provide a windfall for many rich people.
The biggest changes are yet to come. The Republican tax plan will add at least $1 trillion to the deficit, and Republicans are readying welfare and social service reform in response, which would mean that they would be demanding austerity measures to pay for a massive corporate tax break. Social Security, Medicaid, and Medicare—three programs championed by Democrats—could all be in trouble thanks to this bill.
This is all part of a larger truth about the Republican tax plan: Over the next ten years, many, if not most, lower- and middle-income taxpayers will see tax increases, so that Republicans can pay for their corporate tax cut and other giveaways to the rich. This, in a post-recession environment in which corporate America and the 1 percent have siphoned off huge gains, while wages for workers have been comparatively stagnant. This is not a winning electoral message. It would make sense, then, for Republicans to embrace Moore’s rhetorical strategy of noting that the tax reform plan sticks it to the Democrats. Tax reform is a nakedly political bill that attacks the GOP’s political enemies—and that message may be the best Republicans can offer in an era when tribal enmities are the single biggest force in politics.
This hastily conceived tax reform bill—some of which was sketched out in pen hours before the final vote—was not designed for policy reasons, whatever Republicans may say. Trickle-down economics has been revived with the same indifference to real-world policy that characterized Mitch McConnell’s assertion that this bill would actually be “beyond revenue neutral.” No, this is about politics. It’s the clearest evidence yet that today’s GOP has two animating principles: giving more to the wealthy (which appeases their donors) and owning the libs (which pleases their base).
The tweet was clearly meant to scare me. Jack Posobiec, an alt-right bottom-feeder best known for promoting the Pizzagate and Seth Rich conspiracy theories, issued this ominous warning on November 25:
Posobiec is an ally of the notorious Roger Stone, famed for his political dirty tricks, and the tweet echoed the phrase “time in a barrel,” which Stone had tweeted prior to scandals breaking out about Democratic Party bigwig John Podesta (the Wikileaks email leak) and Senator Al Franken (the accusations of sexual assault). I wondered what Posobiec could possibly have on me. The most shocking thing about my emails is my tardiness in answering them; I have hundreds of orphaned missives in my “drafts” folder. My sex life, for better or worse, is hardly tabloid material. And I have never sexually harassed anyone. Maybe Posobiec, being a fabulist, was planning on inventing a lurid tale of depraved behavior?
Posobiec’s smear job, which came a few days later, was weak tea. He had dug up some tweets I had written in 2014 and 2016 offering a partial defense of conservative political scientist Tom Flanagan and fired Nintendo employee Alison Rapp, both of whom had heterodox opinions on child pornography. In my tweets, I argued that the law should distinguish between child porn that is a work of the imagination (where no one is harmed in the production) and child porn that is a work of reproduction (where actual children are hurt); that possession of child porn might best be dealt with through therapy rather than jail; and that Flanagan and Rapp shouldn’t be fired for arguing for changes in child porn laws. Posobiec spun this into a defense of child porn, and argued that I was a hypocrite for criticizing Alabama senatorial candidate Roy Moore, who faces multiple credible accusations of child molestation.
We live in an age of weaponized outrage, where bad-faith actors use out-of-context statements to get people fired. But I had little to fear from Posobiec’s attack, not just because he was attacking perfectly reasonable opinions. I was also safe because my employer, The New Republic, is secure in its identity as a journal of opinion and has owners who can recognize a right-wing character assassination when they see it.
Sam Seder hasn’t been so lucky. An MSNBC contributor, he was the target of a mudslinging campaign by alt-right media personality Mike Cernovich, a Posobiec ally and fellow Pizzagate fabricator. The campaign against Seder followed the same script as Posobiec’s smear job against me: digging up an old tweet and accusing Seder of alleged hypocrisy for criticizing Roy Moore and President Donald Trump for their sexual offenses. But Seder’s bosses at MSNBC didn’t stand by him, revealing an increasing danger to free speech in the age of the internet troll.
In 2009, many mainstream voices, notably Washington Post columnists Anne Applebaum and Richard Cohen, defended Roman Polanski against rape accusations, arguing that he was the victim of judicial misconduct and noting that the victim (who was 13 years old at the time of the rape) had forgiven Polanski. The broader defense of Polanski, in Hollywood and beyond, also seemed partly out of respect for the director’s artistic genius. Thus, Seder tweeted:
Seder’s tweet is clearly a sarcastic jibe directed at Polanski’s apologists. But Cernovich and his allies ginned up a controversy, writing to advertisers at both Seder’s podcast and at MSNBC. The campaign against Seder was clearly done in bad faith, not only because it twisted the meaning of his words but also because Cernovich has a history of making genuinely sinister comments on rape. (Cernovich was accused of rape in 2003, but the charge was dropped and he was convicted of battery.)
New York magazine’s Jesse Singal, in a post on Tuesday, acknowledged how Seder’s tweet and the campaign against him looks from “the point of view of a corporate employee at MSNBC, or at one of Seder’s advertisers.” But, he added, “things look quite differently if you have a little bit of knowledge about the denizens of the far-right internet and how they operate. If you know, for example, that Cernovich, the ringleader of this operation, has said things like, ‘Have you guys ever tried “raping” a girl without using force? Try it. It’s basically impossible. Date rape does not exist,’ or ‘A whore will let her friend ruin your life with a false rape case. So why should I care when women are raped?’—you might be a little less likely to take his and his fans’ concerns seriously.”
Corporate fear prevailed, and MSNBC cut ties with Seder.
While an appalling decision, it’s not surprising. We live in an age where media outlets of all sizes are increasingly jittery. The rise of the internet has shriveled traditional sources of advertising revenue, while political attacks (notably Trump’s “fake news” refrain) have deepened public distrust of the media. In the meantime, the Weinstein effect has revealed many media icons, from Matt Lauer to Charlie Rose, to be sexual harassers. In this nervous environment, it’s all too easy for sleazy operators like Cernovich to manufacture a scandal and get someone fired.
But it’s not just the rancid right doing so. MSNBC host Joy Reid has been targeted over homophobic blog posts she wrote nearly a decade ago. The main instigators of the campaign against her are Bernie Sanders supporters who are angry over Reid’s full-throated support of Hillary Clinton during last year’s Democratic primary. Unlike the Seder fabrication, the Reid case has some basis in reality. She did direct homophobic jibes against politicians like Florida Governor Charlie Crist. But Reid has apologized for her comments, an atonement Crist has graciously accepted. Firing Reid, as some are calling for, is a punishment far in excess of the crime.
Sanders supporters, and others on the left, need to rethink the way they weaponize outrage and try to get people fired. Mike Cernovich’s campaign against Seder shows how setting a low bar for fireable offenses will lead to the silencing of important progressive voices. Media companies also have to be better prepared for manufactured social media outrages, developing policies that make a distinction between speech (including offensive and controversial speech) and actions. To fire sexual harassers is an imperative, but that’s very different than firing a commentator for making a pungent point about rape.
One big problem, as CNN’s Andrew Kaczynski notes, is that the managers of large media operations aren’t familiar with how social media works. “Executives/editors in corporate media need to understand the bad faith actions here and world of social better,” Kaczynski tweeted. “Part of the problem here could be people like the executives at NBC/MSNBC don’t even understand what was going because they don’t even operate in the same world as many of these people.” And until these executives and editors do understand this world, charlatans like Cernovich will exploit the newfound sensitivity to sexual harassment to limit free speech.
I’ve never seen somebody smoke inside an ice rink before. But then I’ve also never seen a definitive account of what happened between Tonya Harding, Nancy Kerrigan, Jeff Gillooly, and Shawn Eckhardt on January 6, 1994. Both unusual sights came courtesy of I, Tonya, the new biopic of Harding. The smoking was done by Allison Janney as Lavona, Harding’s mom, who beat and humiliated her supposedly in order to extract her best performances. (“She skated better when she was enraged,” we learn.) These details come from testimony by Gillooly, Harding’s ex, who also beat our ice-dominating heroine during their awful marriage. From his lips the screenwriter Steven Rogers has put together the who, how, and why of Kerrigan’s notorious beating in the hallway of a Detroit ice rink.
Harding herself apparently had nothing to do with it—the beating was the result of botched communications between Gillooly, his idiot friend Shawn, and some thugs they’d hired. But she was convicted of obstructing the investigation after the 1994 Lillehammer Olympics, and her career on ice was abruptly cancelled. “I ruined her career,” Gillooly (played by Sebastian Stan) says in I, Tonya.
The narrative certainty in this movie marks it as distinct from the 2014 ESPN 30 For 30 special “The Price of Gold,” which reignited interest in the Harding legend but featured no Gillooly. Though that documentary made no claim to solving the mystery of the Kerrigan-bashing, it was compelling precisely for its genre features. We saw Harding herself on screen, laughing, holding fast to her version of events, with occasional and delightful lapses into badmouthery. In between, there was archival footage of Harding on the ice. Good though her avatar Margot Robbie is in I, Tonya (helped by stunt actors and special effects artists), the on-ice action of the new movie will never match the thrill of the real thing.
Harding completed a triple axel in the U.S. Figure Skating Championship in 1991—the first woman to do so—and it is very special to watch. Ice skating is great to listen to, as well. Harding’s superathletic and aggressive style sounded distinctive, raspy. When she lands from a big jump the sound is like ice being sculpted with a saber. Moving that fast on ice is very dangerous and very cool. It is not as dangerous and cool when it is not being done for real.
That documentary also enjoyed a great deal of archival press footage, since the Harding-Kerrigan scandal was one of the first perma-cable stories. Journalists hounded Harding day and night, with one unscrupulous hack calling a service to tow her van, just to get her out of the house. The story was huge. Kerrigan and Harding were both talented working-class athletes, but the judges favored Kerrigan for her Brooke Shields elegance and styling. Harding, although the more powerful skater, liked to dance to ZZ Top. And of course, there is the famous footage of Kerrigan right after she was batoned in the thigh by a hired goon. “Why? Why?” she screeches, laid out on the corridor floor.
In place of the real thing, I, Tonya injects style. Riffing on the narration-plus-archival-footage technique in “The Price of Gold,” director Craig Gillespie (Lars and the Real Girl) weaves talking-head voiceovers into flashback scenes. A character will occasionally turn to the camera during a contested event and say, “This didn’t happen.” Even better is a glorious training montage during Harding’s second run at the Olympics. “She actually did this,” her coach Diane Rawlinson (Julianne Nicholson) tells us, as Tonya runs up a mountain with two water coolers strung milkmaid-style to a pole over her shoulders. The movie’s best line is “Shit is a dish best served never.” The movie’s best shot follows Tonya out the window of her abusive husband’s house and away, away, away down the lane.
There are aspects to this story that I, Tonya cannot catch. Robbie is a solid comic actor, but she’s too pretty to play Harding. The actor who plays Kerrigan (Caitlin Carver) looks nothing like her. In this story the likeness matters, because the girlish coarseness of Harding’s features playing against Kerrigan’s delicacy was a crucial ingredient in their cable-saga enmity. Kerrigan bore an amazing similarity to two other beauties of the 1980s and 90s, Lara Flynn Boyle and Jennifer Connelly. Harding was whiter white trash, with her frizzy hair, braces, and love of bad rock. That’s mostly lost in the movie.
But I, Tonya survives those production difficulties due to the Shakespearean stakes of its real-life plot, as polar and duetto as Alexander Hamilton and Aaron Burr, or Al Capone and Bugs Moran. Harding is essentially a tragic antihero, a physical genius whose career was both created and destroyed by abusers: first her mother, then her husband. She was beaten and lied to and degraded at the same time that she very literally ascended to heights on the ice that no other woman had ever achieved. After her career was over, she tried boxing for two years, to little success. (In 2010 she did pull off one more feat, setting a land speed record for racing a vintage gas coupe, in a 1931 Ford Model A named “Lickity-Split.” Her love for fixing up trucks is emphasized in I, Tonya.)
When it all came tumbling down, the media subjected her to a second trial. “It was like being abused all over again,” Robbie-as-Harding says into the camera during an interview segment. “Only by you.” Harding’s story is perfect for a movie because it is full of plot twists, and consists of three major acts—childhood, extraordinary success, despair—that are framed through the lens of reminiscence. It is also built around a toothsome arc of the De Casibus Virorum Illustrium kind: The inevitable turn of fortune’s wheel must crush Tonya Harding, as it crushed King Arthur and Alexander the Great.
Making his first presidential visit to Utah on Monday, Donald Trump went out of his way to praise the state’s octogenarian senior senator, Orrin Hatch. “You are a true fighter, Orrin,” Trump said before a crowd at the state capitol in Salt Lake City. “You meet fighters and you meet people you thought were fighters but they’re not so good at fighting. He’s a fighter. We hope you will continue to serve your state and your country in the Senate for a very long time to come.”
This might seem routine—a Republican president praising a veteran Republican lawmaker in his home state—but the political press knew something else was afoot. The 83-year-old Hatch, the longest-serving Republican senator in history, is eyeing retirement after seven terms. If he retires, former GOP presidential nominee and current Trump critic Mitt Romney may run for the seat. Polls show he’d be a shoo-in.
“Mitt’s a good man,” Trump insisted to reporters on Monday. But the Associated Press reported that “privately, Trump has signaled support for an effort to submarine Romney. Trump has vowed to try to block Romney, whom he views as a potential thorn in his side in the Senate, according to a White House official and an outside adviser who have discussed the possible bid with the president.”
But how much does Trump really have to fear from Romney in the Senate?
On the one hand, a Senator Romney could cause the president headaches, as Monday demonstrated. Hours after Trump endorsed alleged child molester Roy Moore in the Alabama Senate race, Romney reiterated his opposition to Moore’s candidacy:
Romney has also challenged Trump’s violations of American political norms and his moral equivalency after Charlottesville. I’ve even argued that his ascension to the Senate could be good for the country. But National Review editor Rich Lowry sees an irony in Trump’s fear of Romney and embrace of Moore: “Romney would be a more reliable vote for the lion’s share of the Trump agenda than Roy Moore, who isn’t going to be a reliable vote on anything.” (Moore opposed the Republican plan to gut Obamacare earlier this year because it didn’t go far enough.) In a piece last month, Lowry noted Moore’s contempt for Senate Majority Leader Mitch McConnell, whom Trump desperately needs to secure any legislative accomplishments:
If Moore were in the Senate, he’d presumably be a reliable Republican vote like any other Alabama senator. The only difference is that he hates McConnell. Is that worth the reputational risk to the party of being associated with such a compromised figure? If there is a new Republican Senate leader in the next Congress, he sure as hell isn’t going to be a bomb-thrower (Senate leaders never are). So what’s the point?
National Review senior editor Jonah Goldberg was even more explicit back in September:
[Moore] will also almost surely say or do things that will encourage Republican senators in more moderate states to disassociate from Moore even when the actual policy position is right. Republican senators who need votes from independents and moderate Republican voters will not enjoy being linked to Moore in ads from Planned Parenthood and being asked by hostile reporters whether they agree with their Republican colleague’s views. In this and in myriad other ways, Moore will make it harder for Senate leadership to get things done — whether that leader is McConnell or someone else.
Lowry says Romney would be reliable on confirming conservative judges and advancing most GOP policy priorities, whereas Moore is a wildcard. Yet Lowry acknowledged that Romney wouldn’t have any personal loyalty to Trump if the president were to face impeachment. That’s why some prominent Democrats think Trump is right to fear Romney. “The fact that there’s more or less agreement on a number of issues is probably less important,” Democratic pollster Stan Greenberg told me. He believes that Romney, who famously called Russia our “number one geopolitical foe” in 2012, wouldn’t hesitate to challenge Trump on foreign policy.
Democratic strategist Tad Devine, who worked for Senator Ted Kennedy when Romney unsuccessfully challenged him in 1994, told me Romney “would be with Trump in terms of policy.” But he thinks the president may also be worried about his re-election. “If Mitt Romney became a member of the United States Senate, I think overnight he would become the number one challenger to Trump for the nomination,” he said. “I think Trump views him as a threat, as he should.” Under the right circumstances, Devine said, “I could see Romney standing up and leading people who want to impeach the president or push him out.... He’d be very quick to move to that posture.”
Surely some Democrats even welcome the prospect of a Senator Romney, assuming the alternative is Hatch or someone resembling the junior senator, Mike Lee. (Utah hasn’t elected a Democratic senator since the 1970s.) As he was leaving the Capitol on Monday night, Congressman Seth Moulton told me “the president should be scared” of Romney. “I don’t agree with Mitt Romney on everything,” the Massachusetts Democrat said, “but he is a leader and the president is not.”
But as evidenced by the current tax legislation—which Romney likely would have supported—there are limits to how much any GOP senator can contribute to the resistance. Democratic strategist Lis Smith described Romney as “NeverTrump before NeverTrump was cool,” and she agrees that his stature would allow him to take Trump on in a way most Republicans can’t. Yet Smith said Trump’s economic policies are “virtually indistinguishable” from those Romney ran on in 2012. “I see this purely as a matter of personality differences,” she told me. “Having a Mitt Romney in the Senate of course is better than having a Roy Moore in the Senate, but he’s still a Republican.... No Republican is going to save us in this fight.”
If nothing else, Democrats can take comfort that having Romney and Moore in the Senate would haunt Trump in different ways. “Romney is an enormous thorn in Trump’s side,” American Enterprise Institute scholar Norman Ornstein told me. “Moore is a constant reminder of Trump’s own sordid history.”
“Bitcoin is the World’s Hottest Currency, but No One’s Using It,” the Wall Street Journal proclaimed on Saturday. The day prior, the digital currency had surged past $10,000 per coin—and then past $11,000, too. But despite Bitcoin’s value, the paper explained, brick-and-mortar stores have been slow to accept it as a method of payment. Thus, some observers are becoming pessimistic about whether this tech-hipster cryptocurrency, which everyone has heard of but most people don’t truly understand, will ever replace traditional currency. “I don’t think it will be a currency,” bitcoin investor Alex Compton told the Journal. “If people use it as a currency, it will lose value as an investment.”
No one may be using Bitcoin, but we’re all paying for it. Bitcoin analyst Alex de Vries, otherwise known as the Digiconomist, reports that the coin’s surge caused its estimated annual energy consumption to increase from 25 terawatt hours in early November to 30 TWh last week—a figure, wrote Vox’s Umair Irfan, “on par with the energy use of the entire country of Morocco, more than 19 European countries, and roughly 0.7 percent of total energy demand in the United States, equal to 2.8 million U.S. households.” (As of Monday, the figure had reached nearly 32 TWh.) Just one transaction can use as much energy as an entire household does in a week, and there are about 300,000 transactions every day. That energy demand is more often than not met through fossil fuel energy sources, which, along with polluting air and water, emit greenhouse gases that cause climate change.
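The household comparison can be sanity-checked from the figures above. The annual consumption and daily transaction count are the estimates cited in this piece; the household baseline below assumes the oft-cited U.S. average of roughly 900 kWh per month:

```python
# Rough sanity check of Bitcoin's per-transaction energy use,
# using the late-2017 estimates cited above (Digiconomist).
annual_twh = 32                    # estimated annual network consumption, TWh
transactions_per_day = 300_000     # approximate daily transaction volume

annual_kwh = annual_twh * 1e9      # 1 TWh = 1 billion kWh
per_transaction_kwh = annual_kwh / (transactions_per_day * 365)

# Assumed average U.S. household usage: ~900 kWh per month.
household_week_kwh = 900 * 12 / 52

print(f"~{per_transaction_kwh:.0f} kWh per transaction")
print(f"~{per_transaction_kwh / household_week_kwh:.1f} household-weeks each")
```

The division works out to roughly 290 kWh per transaction, or a bit more than a typical household burns in a week, which is consistent with the claim above.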
In other words, Bitcoins are contributing to the warming of the atmosphere without providing a significant public benefit in return. Some Bitcoin enthusiasts claim that it will eventually become a mainstream currency, and that the cryptogovernance system upon which it’s built could actually help the environment. But the Bitcoin market is volatile, its future murky. We only have 32 years left for carbon emissions to peak and then rapidly decrease, if our planet is to remain livable. We don’t have time or resources to waste on Bitcoin.
Unlike cash, Bitcoins cannot be printed or otherwise “made” by human hands; they exist solely in digital form. In order to create one, a computer must access the Bitcoin network and solve a complicated math problem, a process known as “mining.” But there is a finite number of Bitcoins that can be mined—21 million, to be exact—and as more Bitcoins are mined, the math problems get more challenging. Thus, computers must work harder—that is, process more information—in order to solve the problem and mine a Bitcoin. (This Bitcoin can then be sold and re-sold online.)
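The “math problem” is a hash puzzle: a miner varies a throwaway number (a nonce) until the hash of the block data falls below a network-set target, and the network shrinks that target as more computing power joins. A toy sketch, for illustration only (real mining double-hashes an 80-byte block header against a vastly harder target):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits.

    Toy version of Bitcoin's proof-of-work: real miners hash a block
    header twice with SHA-256, and the network adjusts the target so
    blocks appear roughly every ten minutes regardless of total power.
    """
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found; anyone can verify it with one hash
        nonce += 1  # expected attempts double with each extra difficulty bit

winning_nonce = mine("example block", difficulty_bits=16)
```

The asymmetry is the point: finding the nonce takes enormous amounts of guessing (and therefore electricity), while checking it takes a single hash.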
Off-the-shelf personal computers used to be powerful enough to mine Bitcoins. Now, because the math problems are so complex, miners must use specialized hardware called application-specific integrated circuits, or ASICs. These mining machines are big and run hot, and the people who use them—either Bitcoin mining companies or Bitcoin enthusiasts working together—consume a lot of electricity to do so. Companies and organizations that mine Bitcoin will sometimes have thousands of these machines packed into expansive warehouses. In 2015, Vice profiled a Chinese Bitcoin mining facility that spent $80,000 per month on electricity for these ASIC miners, in order to produce 4,050 bitcoins in the same period.
Because electricity is such a big expense for Bitcoin miners, companies often seek to establish themselves in places where electricity is cheap—and dirty. From Vox’s Irfan:
A study from the University of Cambridge earlier this year found that 58 percent of Bitcoin mining comes from China, describing “an arms race amongst miners to use the cheapest energy sources and the most efficient equipment to keep operators profitable.” Cheap power often means dirty power, and in China, miners draw on low-cost coal and hydroelectric generators. De Vries analyzed one mine in China whose carbon footprint was “simply shocking,” emitting carbon dioxide at the same rate as a Boeing 747.
Bitcoin has been criticized for its energy use for years. In 2013, Bloomberg deemed it “a real-world environmental disaster,” asserting that the mining process used $150,000 worth of electricity a day. Criticism has grown louder as more coins have been mined—from approximately 11 million in 2013 to nearly 17 million today. “Since 2015, Bitcoin’s electricity consumption has been very high compared to conventional digital payment methods,” Christopher Malmo explained recently in Motherboard. “This is because the dollar price of Bitcoin is directly proportional to the amount of electricity that can profitably be used to mine it.”
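Malmo’s proportionality claim follows from simple break-even economics: rational miners keep adding hardware until the cost of the electricity they burn approaches the value of the coins they mine, so a higher price licenses more consumption. A hypothetical illustration (all figures invented for the example):

```python
# Illustration of Malmo's point: the coin's price caps how much
# electricity can profitably be burned mining it. All numbers here
# are invented for the example, not real market figures.
def breakeven_kwh_per_coin(coin_price_usd: float,
                           electricity_usd_per_kwh: float) -> float:
    """Max kWh a miner can spend per coin and still break even."""
    return coin_price_usd / electricity_usd_per_kwh

low_price = breakeven_kwh_per_coin(1_000, 0.05)    # 20,000 kWh per coin
high_price = breakeven_kwh_per_coin(10_000, 0.05)  # 200,000 kWh per coin
```

Tenfold the price, tenfold the electricity that mining can profitably absorb; efficiency gains in the hardware shift the constant but not the proportionality.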
But De Vries’s analysis of Bitcoin’s energy use has been criticized, too. Marc Bevand, a Bitcoin investor, told Irfan that he suspects the currency’s global energy use “was likely closer to 15 terawatt-hours, which is still a huge amount of electricity, but half of the estimate on Digiconomist.” Bevand also noted that mining computers will surely become more energy efficient over time; after all, it’s not like companies want to spend that much money on electricity.
Some dismiss the environmental case against Bitcoin completely. Writing in Forbes in 2013, Tim Worstall called the argument “desperate,” dismissing the currency’s energy use at the time as “simply trivial.” He added, “at some point Bitcoin mining will stop. There is an upper limit to the number that can ever be mined... Thus this energy consumption will not go on rising forever.”
Four years later, though, it’s still rising. And according to de Vries, there’s no sign that it’s going to stop any time soon.
Bitcoin was originally pitched as a benefit to society: a way to eliminate the corporate middleman (banks) from financial transactions, and instead use a shared public ledger (known as “the blockchain”), maintained by the Bitcoin network itself, to ensure the validity of payments. This would create a sort of utopia where public trust is restored to the financial system.
Indeed, if Bitcoin could evolve to become what it was intended to be—a way to complete day-to-day financial transactions without the involvement of banks—some say it has the potential to do enormous good. Portia Burton, who runs the blockchain explainer site Bits and Chains, speculates it could be used to prevent atrocities like slave labor in the seafood industry. “With the blockchain, all transactions are identified and sent to an open ledger where governments, companies, and consumers are able to track the origins of their seafood,” she wrote in January. “Suppliers who don’t identify their fish can be actively avoided.”
Writing in the journal Nature, wildlife researcher Guillaume Chapron made the more complicated argument that the environment needs cryptogovernance. “Bitcoin demonstrates that banks and governments are unnecessary to ensure a financial system’s reliability, security and auditability,” he wrote. “For sustainability, blockchain technology could be a game-changer. It can generate trust where there is none, empower citizens and bypass central authorities. It could also make existing institutions obsolete, including governments, and raise fierce opposition. Laws could be replaced with ‘smart contracts’ written in computer code.”
But these arguments founder if Bitcoin’s promise is not fulfilled. Bitcoin is becoming more and more valuable, but only to people who see it as a wise—or entertainingly risky—investment. Most people use Bitcoin as a way to make money, rather than as money itself. In a way, buying a Bitcoin is no different than investing in an unpredictable stock on NASDAQ, but the cost to the planet is immeasurably worse. As it fails to address one societal ill, it’s contributing a staggering amount to another one.
From the outside, the Louvre Abu Dhabi announces grand ambitions. Jean Nouvel’s design for the museum is intricately otherworldly. Its domed roof, crisscrossed with a geometric pattern, hovers above the 55 rooms that house the art. Built at the water’s edge, the museum seems to float on the sea like a saucer that has just landed. Inside, the collections tell a global history of art, assembling paintings, sculptures, and artifacts from all over the world, and arranging them around themes like early man or the rise of world religions. All over Abu Dhabi, along its highways and in its airport, bold advertisements for the museum proclaim that visitors will “see humanity in a new light.”
The museum, which opened in November, fits with a wider cultural movement in the Gulf. Over the past decade, the region has begun a sort of cultural arms race, one that counters its image as a wealthy desert with little culture or history. I.M. Pei’s Museum of Islamic Art opened in 2008 in Doha, and when it is completed in a year or so, Jean Nouvel’s desert-rose design for the National Museum of Qatar may even surpass his vision for the Louvre. The Sheikh Zayed Mosque’s bulbous domes and gilded interiors have made it a destination for tourists since its opening in 2007, as have Dubai’s Burj Khalifa (currently the world’s tallest building), the Emirates Palace Hotel (where coffee is served with gold shavings), the “seven-star” Burj Al-Arab Hotel, and the region’s beacons of capitalism: the Dubai, Ibn Battuta, and Mall of the Emirates malls.
These projects are often questioned on two counts. The first is artistic: Can they tell an authentic, original story about history and culture? New cultural institutions in the Gulf are often criticized as imitations or mere outposts of Western museums. And indeed many of these new projects are being developed in partnership with Western organizations. In Abu Dhabi alone, there are now campuses of NYU and the Sorbonne, a hospital run by the Cleveland Clinic, innumerable KFCs and Pizza Huts, the French grocery chain Carrefour, and now the Louvre. “Everything is imported here” is a complaint I heard any number of times during my stay in Abu Dhabi last month, even from people who have lived there for decades. So long as Western institutions continue to dot its landscape, the country’s own history will likely continue to be discounted or ignored.
The second type of criticism is political. Does the sheen of these new institutions serve as a cover for the region’s deep economic and political inequalities? Funding for the Louvre Abu Dhabi has been linked to the sale of arms, and critics have observed a disparity between the global elites this museum covets as an audience and the migrant laborers who constructed it. Does it portray humanity in a new light, or does it obscure reality for many in the modern Gulf?
The Louvre Abu Dhabi’s success will depend at least in part on whether its narrative about art is convincing. Its leadership has made a concerted effort to control this story very carefully: The art galleries do not have an open floor plan. Instead, visitors must follow the official path of the exhibition in order, from chapter one (“The First Villages”) to chapter two (“The First Great Powers”) up until chapter twelve (“A Global Stage”), where they can then exit. Humanity, the exhibition seems to argue, is essentially cosmopolitan; the cultures of the world have always mixed, progressing smoothly toward the great melting pot of globalization. When, during a visit to the museum last month, I tried to stray from the show’s prescribed path, a guard stepped in and gestured towards my next official location.
The Louvre Abu Dhabi is also working with a fairly limited range of artifacts. In its inaugural show, the museum displays 600 objects. Three hundred are owned by the museum, and three hundred are loans from French museums, including the Louvre in Paris. (In 2007, the U.A.E. agreed to pay France $1 billion for loans of art and the Louvre name for 30 years. Jacques Chirac hailed the deal as a way to bridge cultures, though the French art cognoscenti were barely able to hold their noses.) To put its 600-artwork collection in context, 35,000 paintings are currently on display at the Louvre in Paris. The quality of the art is, of course, world-class—and there is, mercifully, no Mona Lisa-esque painting that visitors will jostle in front of, just to take a picture.
The museum’s thematic approach to art history has a few kinks to work out. Sometimes it’s too easy to see universals in human history: Three pots from three different civilizations, sitting next to each other, are supposed to demonstrate our “shared humanity.” The same goes for displays of early currencies, jugs, and sculpture. “Heads,” a French tourist said to her daughter as they walked by a few ancient Roman busts. “Lots of heads.” Indeed. These early chapters seem primarily concerned with pointing out that different civilizations—Greek, Chinese, Egyptian—used similar coins or buried their dead in similar fashion. Too often, the museum guide or wall text fails to provide any details about the local context for these objects—any acknowledgement of the specificity of the cultures that produced them—and instead falls back on the notion that no matter where we come from, we’re all the same.
When the exhibits present thoughtful detail, however, they succeed. For instance, in the room titled “Challenging Modernity,” paintings by Josef Albers, Mark Rothko, and Sayed Haider Raza are displayed side by side. The paintings—gray, orange, and black-brown blocks of color—share a formal grammar and seem to converse. Their harmony is striking, even to the viewer who has been to a Rothko room before but is unfamiliar with Raza, the Indian painter trained in France. The sequence of these paintings also does the critical work of art history, allowing viewers to trace a chain of artistic influence from Germany to the United States to India over the twentieth century—just by looking left to right.
It’s moments like this that show the museum’s promise. And the project’s long-term success has two more factors in its favor. One is location. Gulf airlines have fashioned the U.A.E. into one of the world’s busiest travel hubs. In 2016, Dubai attracted the third highest number of international overnight visitors in the world, behind only London and Paris (and ahead of New York). It’s not hard to imagine the museum drawing the kind of traveler who might stop in the Gulf for a day before hopping on a connecting flight on Emirates or Etihad. A second is that in 2037, the museum will lose the “Louvre” from its name, and will have a chance to choose its own distinct identity.
The Louvre Abu Dhabi is a testament to the U.A.E.’s growing cultural power as a center of globalized capital. It tells a story about our shared humanity, the culture we can all share in a world linked by globalization. It has won tempered praise, too, for its efforts to lift “the universal museum from its bedrock of western privilege.” But it doesn’t tell us about the realities of globalization for many of the people who work in the U.A.E., including those who have worked on the construction of big, statement projects like the museum.
As Human Rights Watch and The Guardian have reported, some workers on construction sites in the U.A.E. enter the country having already paid recruiting fees that can reach into the thousands of dollars, only to find themselves sleeping in close quarters and living in squalid conditions. There is often no recourse to complain, either, as some employers seize the passports of their laborers. “We built the United Arab Emirates,” an Indian laborer, who was deported from the U.A.E. after participating in a strike, told The New Yorker last year. The Emirati government rarely addresses criticisms of its migrant labor policy directly. After the Louvre’s preview day on November 8, however, The National, the U.A.E.’s government-owned newspaper, announced that the museum had won over its toughest critics in the Western media, reprinting selected praise from reviews in The New York Times and The Guardian.
Fiction writers have paid more attention to the country’s inequalities. Temporary People, a recent novel by Deepak Unnikrishnan, dares to expose the inequalities in relationships between Emiratis and the South Asian diaspora that vastly outnumbers them. (Unnikrishnan teaches creative writing at NYU Abu Dhabi, but I could not find his novel in the university bookstore.) The Indian novelist Benyamin’s 2008 book Goat Days is a lightly fictionalized account of a Keralan migrant worker’s enslavement in Saudi Arabia. Kerala has sent almost a million laborers to the Gulf. Though these are not realities you will see at the Louvre Abu Dhabi—at least not in its exhibits—they are undeniably part of the experience of visiting.
A day before I was set to leave, I had an urge to see the museum once more. As I walked in, I saw four South Asian laborers on a boat in one of the museum’s pools. They were scrubbing the space between two of the building’s white walls. The boat was small, though powered by twin, 200-horsepower Yamaha engines attached to the stern. As a small wave of people began to enter the museum, a man stopped to take a photo.