Gregg Allman, the Sound of Southern Rock
May 27th, 2017, 06:37 PM

Gregory LeNoir "Gregg" Allman, a founding member of the Allman Brothers Band who, with his Hammond B-3 organ and soft but growling voice, helped create a sound that was simultaneously jazz, rock, blues, and part San Francisco jam band, a sound that became the defining tone of Southern rock music, died on Saturday. He was 69.

His death was announced on his website; no official cause was given. Allman struggled for much of his life with health issues and drug addiction, and the statement on his death said, “During that time, Gregg considered being on the road playing music with his brothers and solo band for his beloved fans, essential medicine for his soul. Playing music lifted him up and kept him going during the toughest of times.” It added that he “passed away peacefully at his home in Savannah, Georgia.”

Late last year Allman canceled tour dates, saying he’d suffered a throat injury. In March, he canceled the remaining shows on the tour.

Allman and his older brother, Duane, started the band together after years of touring to little success. They were born in Nashville, Tennessee, to a military father who was killed in 1949 by a hitchhiker he picked up on Christmas Day. They lived in Florida, played in high school bands, and moved to Los Angeles in the ’60s under the band name Hour Glass. They released two albums for Liberty Records, neither to any success. In 1968 everyone but Gregg left the West Coast, and Duane became a studio musician at the famous Muscle Shoals recording studio in Alabama. But Gregg quickly burned out on the music scene in California and rejoined his brother. Along with drummer Butch Trucks, percussionist Jai Johanny Johanson, guitarist Dickey Betts, and bassist Berry Oakley, the brothers formed what would become the Allman Brothers Band.

In 1969 the band released a self-titled album to little success. The next one, Idlewild South, firmed up their sound and set them apart. The album had dark, bluesy rhythms guided by the Sabbath-day tones of Allman’s organ. His voice could sound throaty, almost a growl, as on the album’s first track, “Revival,” or more sonorous, as on the next song, “Midnight Rider.” That song captures a lot of what the Allman Brothers Band sound would become: rooted in blues, prone to psychedelic interludes, a bit country, and entirely Southern.

It was only after the release of a live album, At Fillmore East, that they became well known. Their next album, Eat a Peach, would be their most popular. On it were radio hits like “Ain’t Wastin’ Time No More” and “One Way Out,” soft, almost orchestral tracks like “Melissa,” and the half-hour quasi-jazz improvisation of “Mountain Jam.” Between the releases of those albums, though, Duane Allman died in a motorcycle accident in Macon, Georgia. A year later, the band’s bassist, Oakley, died the same way. In a 1973 Rolling Stone feature written by the then-16-year-old Cameron Crowe, Allman reflected on his start in music and on his brother’s musical influence and first motorcycle:

"He quit school, I don't know how many times. Got thrown out a few times too. But he had that motorcycle and drove it until it finally just fell apart. When it did, he quit school. While I was gone, he'd grab my axe and start picking. Pretty soon we had fights over the damn thing, so when it came around to our birthdays—mine was in December and his was in November—we both got one. I got mine a little earlier than my birthday, actually. Matter of fact, I put hands on my first electric guitar November 10th, 1960, at three o'clock that Saturday afternoon. Duane's guitar got into the picture shortly after that."

Gregg gave the Sears guitar to a family friend, and it is probably still somewhere in Daytona Beach the way Gregg last saw it: painted flat black, with gold strings on it, and containing two potted plants.

The Allmans took their electric guitars to Led Connors, "a really intense cat who knew how to teach. He's probably still down there. He didn't teach any of that bullshit minute waltz business. I said, 'Man, I want to learn some goddamn Chuck Berry music!' . . . and he taught me."

It was from these roots that later Southern rock bands like Lynyrd Skynyrd and the Marshall Tucker Band drew their influence, though the Allman Brothers Band cut across genres, influencing their peers in the 1970s and leaving their mark on the era. Allman was the face of that sound. He had long blond hair and thick sideburns that spread across his cheeks, a look many others would go on to copy.

The Allman Brothers Band continued in many variations afterward, always with Allman at the center, whether on guitar, vocals, or his Hammond B-3 organ. The band was inducted into the Rock & Roll Hall of Fame in 1995 and received a Grammy Award for lifetime achievement in 2012. Allman also had a successful solo career. In 2011 he was nominated for a Grammy Award for best blues album for Low Country Blues, produced by T Bone Burnett. Allman released his last album earlier this year.

Throughout the decades, much of Allman’s personal life was fraught with pain, whether from the early death of his father, failed marriages, or drug addiction and recovery. But always there was music. "You've got to consider why anybody wants to become a musician anyway," he said in the ‘73 Rolling Stone feature. "I played for peace of mind."

Lil Yachty and Amazon Charts: The Week in Pop-Culture Writing
May 27th, 2017, 06:37 PM

What Lil Yachty’s Optimism Means
Carrie Battan | The New Yorker
“Yachty’s music is not incidental to his image, but it is only one aspect of his brand. His songs have always been an entry to his meticulously crafted persona, not the other way around. At 19, he is a torchbearer for a class of rappers—and that’s a loose designation—for whom a career represents a tangle of musical innovation and character-crafting strategies.”

The Achievement of Chinua Achebe
Kwame Anthony Appiah | The New York Review of Books
“[Achebe] found a way to represent for a global Anglophone audience the diction of his Igbo homeland, allowing readers of English elsewhere to experience a particular relationship to language and the world in a way that made it seem quite natural—transparent, one might almost say. Achebe enables us to hear the voices of Igboland in a new use of our own language. A measure of his achievement is that Achebe found an African voice in English that is so natural its artifice eludes us.”

How Music and Politics Meet in the Border Community of Texas
Matthew Ismael Ruiz | Pitchfork
“In the Rio Grande Valley, notions of heritage and pride often involve an undercurrent of assimilation that permeates everyday life. For a lot of families, achieving the American Dream means shedding much of the culture they left behind and adopting their new home’s language and ethos of white supremacy.”

What Does Amazon Charts Mean for the Book Industry?
David Barnett | The Guardian
“One advantage Amazon has is that it subdivides literary categories almost to an atomic level, which has both pros and cons. On the one hand, it gives a leg up to authors working in a genre that might not have its own New York Times bestseller category, and who might never trouble the upper reaches of the general fiction sales charts.”

Lolita Fashion
An Nguyen and Jane Mai | The Paris Review
“What is considered beautiful or cute in Lolita fashion is separate from mainstream tastes and trends. It’s about dressing for your own enjoyment, not dressing for others. In other words, it is a self-centered undertaking, an activity of adornment that is not connected to socially productive presentations of self that achieve a goal such as dressing to get a job, dressing to get a boyfriend, dressing to go to school, dressing to fit in, et cetera. Well, I suppose the goal here is to feel happy and beautiful on your own terms.”

What Is Katy Perry Doing?
Ira Madison III | Vulture
“If anything, Perry is most believable when she’s being petty. There’s real heartbreak in ‘Part of Me,’ for sure, but the reason breakup anthems work is because you’re reminding your ex how much better your life is without them. And if the song wasn’t enough to convince [Russell] Brand, she made it the name of her documentary and put his callousness on display for millions to watch.”

Everything You Need to Know About Baywatch in Four Episodes
Andrew Gruttadaro | The Ringer
“Baywatch was pure escapist garbage — mostly because of the soap-opera plots and off-the-wall action sequences, but also because it showcased a lifestyle that viewers didn’t have access to. The L.A. in Baywatch was a sun-soaked paradise, and the show basically acted as video production for California’s tourism board. (Not dissimilar to how Master of None serves as wish fulfillment today for those who dream of dining in hip Williamsburg restaurants.)”

The Atlantic’s Week in Culture
May 26th, 2017, 06:37 PM

Don’t Miss

The Real Reasons for Marvel Comics’ Woes—Asher Elbein investigates the deeper issues of business and culture behind the company’s decline in sales.



Film

In the End, We Got the Baywatch We Deserved—Megan Garber watches the new movie reboot of the ’90s TV show starring Dwayne Johnson.

Roger Moore, Sultan of Self-Deprecation—Sophie Gilbert looks back on the career and legacy of the late, suave British actor.

War Machine Isn’t Sure What Kind of Movie It Is—David Sims reviews the Brad Pitt-starring film based on Michael Hastings’s book about General Stanley McChrystal.

Michael Jackson's Final Tragedy, Told Gently—Spencer Kornhaber explores the uncomfortable irony of Lifetime’s Searching for Neverland.

Pirates of the Caribbean 5 Is a Sinking Vessel—Christopher Orr pans the latest installment in the flagging franchise, Dead Men Tell No Tales.



Television

The Leftovers: Time to Dive—Sophie Gilbert and Spencer Kornhaber dissect the sixth episode of the HBO show’s third and final season.

ABC’s Dirty Dancing Remake Is a Sad, Strange Production—Sophie Gilbert bemoans the musical special.



Music

The Ideology of an Ariana Grande Concert—Spencer Kornhaber highlights how the singer’s career of female self-determination demonstrates the freedoms of religion, sexuality, and expression that much terrorism seeks to undo.

‘Despacito’ and the Revenge of Reggaeton—Spencer Kornhaber talks to a scholar of the genre about the song, whose remix with Justin Bieber is the first Spanish-language U.S. No. 1 hit since “Macarena.”

Celine Dion Saved the Billboard Music Awards—Spencer Kornhaber recaps a memorable performance of “My Heart Will Go On” from the persevering singer.



Books

How One Hundred Years of Solitude Became a Classic—Alvaro Santana-Acuña delves into the unlikely success of Gabriel García Márquez’s most famous novel.



Television

Twin Peaks Returns to Terrify, Delight, and Confound—David Sims revels in the strange and beguiling comeback of David Lynch’s television classic.

The Bachelorette: Memes for the Right Reasons—Megan Garber recaps the premiere of the show’s 13th season.

What Saturday Night Live’s Departures Mean for Its Future—David Sims wonders where the sketch show will go after more cast members left at the end of this season.



Media

The Horror of an Attack Targeting Young Women—Sophie Gilbert tries to make sense of the atrocities at Ariana Grande’s Manchester concert.

Why So Much Is ‘Bonkers’ Right Now—Megan Garber traces the history of the word that seems to most aptly sum up the current news cycle.



Art

‘Instagram’ for 18th-Century Tourists—Kimberly Chrisman-Campbell analyzes highly realistic landscape paintings of the 1700s called vedute, which gave European visitors proof of their trips to exotic destinations.

Why Remix The Birth of a Nation?—Kriston Capps discusses DJ Spooky’s live multimedia performance, in which he considers the 1915 silent film’s legacy as a document of alternative facts.

Why Didn’t Jared and Ivanka Report Their Art Collection?
May 26th, 2017, 06:37 PM

Two weeks ago, Ivanka Trump caught some alone time in Yayoi Kusama’s Obliteration Room. Staged at the Hirshhorn Museum and Sculpture Garden in Washington, D.C., the piece is a living room en blanc, a white-out space filled with all-white furniture and personal effects that visitors are invited to cover with tiny colorful dot stickers. The piece is mesmerizing, an Instagram sensation; a contemporary art lover like Trump wasn’t likely to miss out on the hottest ticket (and snap) in town.

Trump is an art lover, that much is plain. Her own Instagram feed is chock full of images of contemporary art from the Park Avenue condo she and husband Jared Kushner share. Her affection for art—not normally something even her detractors would likely begrudge her—may have worked against her family this week. As reporters at Artnet discovered, Kushner, a senior White House advisor, failed to report the couple’s extensive art collection in required financial disclosures.

How much trouble they’ve landed in is an open question. For federal disclosure purposes, collectibles fall under the category of capital gains. According to the U.S. Office of Government Ethics, senior government officials are only required to disclose collectible items that are “held for investment purposes.” (As an OGE spokesperson explains, Ivanka, who has an unpaid job in the administration, would be covered under Kushner’s disclosure.) If a senior government figure routinely buys and sells valuable artworks, that person must disclose the income as capital gains. And not just for fine art: The same goes for Star Wars figurines.

As Artnet reports, Trump and Kushner have sold just one artwork from their collection. No doubt it met the $1,000 price threshold: The artworks that followers can find in her Instagram feed are all the work of emerging or blue-chip contemporary art talents. She has (perhaps inadvertently) advertised some of her collection’s highlights: a gradient by Alex Israel, paired paintings by Nate Lowman and Dan Colen, an atmosphere by Alex Da Corte. (Much to the chagrin of some of those artists.) Artworks by all of them frequently sell for half a million dollars or more.

One sale hardly meets the frequency limit, however. It may well be the case that, as Ivanka has said in interviews, she and Kushner sincerely love living with art. They aren’t alone among collectors in the administration: U.S. Department of Commerce Secretary Wilbur Ross boasts an art collection worth over $50 million, according to his financial disclosures, which report that he receives no income from it. That is, he’s not an art investor (or at least he wasn’t selling at the time of this disclosure). U.S. Department of Treasury Secretary Steven Mnuchin is, on the other hand: Mnuchin owns a stake in a $14.7 million Willem de Kooning, Untitled III (1978).

Trump and Kushner may not be art investors, as they attest, but they are part of the art world in a way that neither Ross nor Mnuchin is. An art advisor, Alex Marshall, helped the couple to build their collection, Artnet reports; collection consultants are not rare, but they’re not for the casual enthusiast, either. Trump pulls artworks in her collection into her brand: Consider, for example, a Christopher Wool painting that served as a handsome backdrop for an Ivanka Trump Mara cocktail bag in an Instagram post she later deleted.

In the rarefied realms in which Trump and Kushner operate, the line between an objet d’art to be appreciated and a commodity to be moved is a fine one. Serious art collectors have garnered the scrutiny of lawmakers in recent years. In 2015, the Senate Finance Committee demanded documentation from 11 private art collections across the nation that would prove they are, in fact, nonprofit art museums—and not just fancy tax shelters, sometimes located on the same property as the collectors’ homes. Last year, soon-to-be-former House Oversight chairman Jason Chaffetz launched a government-wide probe to see if anybody, anywhere, was wasting taxpayer money on art.

Possibly the omission was merely an oversight on the part of Kushner and Trump. (Kushner appears to have made a lot of those.) Or maybe the couple, pure in their pursuit of contemporary experience, would never consider the possibility of gains as the value of their carefully curated and leveraged art collection appreciates, and therefore chose to withhold this information from financial disclosures. The question is an open one: Do Trump and Kushner take art too seriously—or transparency not seriously enough? It’s also an easy one.

Poem of the Week: ‘Castles in Spain’ by Amy Lowell
May 26th, 2017, 06:37 PM

Amy Lowell’s legacy, as represented in the pages of The Atlantic and in the broader poetic landscape, is a spare and neglected one. Though she was posthumously awarded the Pulitzer Prize in 1926, she never quite reached the heights of literary acclaim or recognition that her relatives James Russell Lowell and Robert Lowell did. And her poetry hasn’t attracted the same level of praise or popular readership as that of some of her contemporaries, like Ezra Pound, who both influenced and criticized her work, or Robert Frost, whom she supported and encouraged in the early years of his career.

But in “Castles in Spain,” published in our August 1918 issue—just months before the end of World War I—she spoke powerfully to the resilience of her own work in the face of war, violence, and the passage of time:

Bombs and bullets cannot menace me,
Who have no substance to be overthrown.
Cathedrals crash to rubbish, but my towers,
Carved in the whirling and enduring brain,
Fade, and persist, and rise again, like flowers.

Many of Lowell’s towers endure, beautiful and evocative, in our archives, a testament to that resilience. You can find some of them—including her very first published work—here.

Pirates of the Caribbean 5 Is a Sinking Vessel
May 26th, 2017, 06:37 PM

The subtitle of the new Pirates of the Caribbean movie is “Dead Men Tell No Tales.” The moral of the movie, alas, is that the same cannot be said of dead franchises.

The first Pirates film was an unexpected success: wildly overlong and over-plotted yet kept afloat by a wicked, bravura, and utterly original performance by Johnny Depp as Captain Jack Sparrow, a swishily swaggering mélange of rum, eyeliner, and impudence. As is customary, the sequel was a pale imitation, and the third installment of the presumed trilogy went a bit trippy and meta.

Which would all have been well and good enough. But money makes people do silly things. The half-hearted and wildly unnecessary fourth movie, Pirates of the Caribbean: On Stranger Tides, was one such thing. It will surprise no one to learn that the latest installment in the franchise is another. At least On Stranger Tides had the decency to be a standalone movie; with Dead Men Tell No Tales, there is talk of that most pernicious of cinematic gambits, the “soft reboot.”

Captain Jack returns, of course, although the character’s originality has gradually evolved into very nearly its opposite, a species of tired and vaguely embarrassing drag act. Given that his co-stars Keira Knightley and Orlando Bloom abandoned the franchise after the initial trilogy, Jack is supplied with a new pair of pretty, mutually attracted protagonists. Brenton Thwaites plays Henry Turner, a young adventurer who is the son of Bloom’s and Knightley’s characters. (No, the franchise hasn’t actually been around that long. Yes, it feels as though it’s been around even longer.) And Kaya Scodelario portrays Carina Smyth, an astronomer and horologist—sadly, there are quite a few jokes playing on that first syllable; more sadly still, they’re above average for the film—who is eventually revealed to be the daughter of ... well, I’d best leave that to “eventually.”

Javier Bardem shows up as the villainous undead pirate hunter Armando Salazar, inheriting the precise plot functions performed in previous installments by Geoffrey Rush’s Barbossa, Bill Nighy’s Davy Jones (who at least had the decency to hide himself under a faceful of tentacles), and Ian McShane’s Blackbeard. And series regular Rush is back again, his pirate Barbossa having been un-undead for several films now.

There’s a small role for Bloom, whose current career seems to consist largely of retconning characters (Legolas, Will Turner) from the period when some mistakenly thought he was a plausible leading man, into projects (The Hobbit, this latest Pirates entry) released at a point when we all know he’s not. There’s even a blink-and-you’ll-miss-it glimpse of Knightley, who clearly has better things to do than waste time in this franchise. In place of a previous cameo by Keith Richards, who was a principal inspiration for Jack Sparrow, we have a cameo by Paul McCartney, who was not.

As with the roles, so too with the plot. Per the norm, there is a mystic artifact to be acquired, the Trident of Poseidon, which has the power to break all of the sea-curses accumulated over the previous four films. (How’s that for a reboot?) There are plots and betrayals, piratical zombies and sea monsters and a ghost ship, and much bouncing around from vessel to vessel.

Even when the movie introduces new elements to the franchise, they are the stalest chestnuts in the cupboard. Jack Sparrow is given an entirely gratuitous origin story, so that he can be cinematically de-aged à la Robert Downey Jr. in Captain America: Civil War or Kurt Russell in Guardians of the Galaxy Vol. 2. In fact, if you genuinely desire subplots about paternities revealed or a noble sacrifice by a secondary character in the final reel, go see (or re-see) Guardians 2, which does both better.

Depp slurs and sways his way through the film as usual, but reports of his erratic behavior on set cast the performance in a somewhat different light this time around. When, at one point, he introduces himself with boozy extravagance as “the great Captain Jack Sparrow,” his audience’s palpable disappointment feels as though it accrues as much to Depp himself as to the character he is playing. Meanwhile, newcomers Thwaites and Scodelario possess a small fraction of the shimmer supplied by Bloom and Knightley before them.

It all adds up to a dreary, dispiriting voyage. During the finale, as Bardem’s Salazar makes a final, mortal approach, he bellows, “This is where the tale ends!” Please, please, please, let it be so.

Michael Jackson's Final Tragedy, Told Gently
May 26th, 2017, 06:37 PM

With an event as seismic and salable as Michael Jackson’s death, the eighth anniversary also means it’s time to rev up for the 10th anniversary—which is to say that the next few years will see a stream of new and unauthorized Jackson-related content. The controversial Sky Arts TV episode in which Joseph Fiennes was set to play one of the most famous black people to ever live, canceled in January, was just the start of what is sure to be a fraught era. In the pipeline: a Netflix animated movie about Jackson’s pet chimpanzee and a film about a Muslim cleric obsessed with Jackson. This Monday brings Lifetime’s Searching for Neverland, a scripted biopic that portrays Jackson’s final years while demonstrating the tensions that necessarily surround all such projects.

Based on the memoir written by Jackson’s bodyguards Bill Whitfield and Javon Beard, Searching for Neverland stars Navi, a Jackson impersonator so dead-on that the King of Pop allegedly used him as a body double from time to time. As is typical for Lifetime biopics—the network has portrayed Britney Spears, Whitney Houston, and the cast of Saved by the Bell in recent years—the subject (or in this case, his estate) has not sanctioned the project. As is also typical, the movie (directed by the TV veteran Dianne Houston) is a visually workmanlike, episodically structured look at the perils of fame that will offend few but enrapture only devoted fans.

The story begins in 2006, with Jackson and his three children returning to the U.S. from Bahrain, where they’d been staying following his acquittal on child-molestation charges. The move to Las Vegas, we’re told in title cards, was in hopes of landing a casino-concert residency for him. Before we meet Jackson we meet Whitfield, a single dad in the private-security industry who is determined that future gigs won’t take away from time spent with his daughter. This goal will not be met. The shrouded, skinny man he escorts from the airport turns out to be Jackson, who hires him on the spot as his permanent bodyguard—one of only two full-time staff members, the other being the nanny.

Jackson’s world as portrayed here is intensely cloistered. In flash-forwards to depositions of Whitfield and his eventual backup Beard after Jackson’s death, investigators ask the bodyguards what people were regularly in the star’s life. The answer: almost no one other than his staff, his mom, his kids, and the fans assembled outside the gate. Other Jackson family members materialize only occasionally in the film, demanding money or unwanted audiences with the artist. Friends, we’re told, were sufficiently scared off by Jackson’s scandals that he couldn’t get anyone to show up for his daughter’s birthday. And public excursions were nearly impossible: Jackson’s use of code names and masks when leaving the house comes to seem less like eccentricity than necessity, given the stressful scenes of fans and paparazzi swarming him with violent intensity at the mall and on the street.

The title of the film refers to Jackson’s belief that Neverland, his famed theme-park-like ranch, had been “tainted by evil” thanks to his legal saga. “We can’t ever go back to Neverland,” he’s seen telling his kids, who repeatedly beg that they all return “home.” Jackson wanted to find them a new and better home, and eventually viewers see him fall in love with a sprawling $55 million Las Vegas property that he imagines turning into “Wonderland.” The dream wasn’t to be: Though the pop star was rich on paper, he had little control over his finances, according to Whitfield and Beard. Jackson’s manager Raymone Bain, and later his new confidante Michael Amir Williams, are presented as self-interested manipulators, ignoring Jackson’s request that his bodyguards be paid promptly and hassling him to sign contracts for strenuous gigs that he doesn’t want to do. It gets to the point where the Jacksons are crashing in the modest suburban New Jersey house of a friend; Jackson, Whitfield says, was more or less homeless.

The exact contours of the events leading up to Jackson’s death are left hazy here: Whitfield and Beard say they were marginalized by Williams in the final months, and the infamous Dr. Conrad Murray, eventually charged with involuntary manslaughter in Jackson’s death, appears only in one brief scene. The stated aim of the movie isn’t to portray Jackson’s demise but rather to humanize him as a father and man, which it does dutifully in scenes of familial movie nights, squirt-gun fights, and birthday parties. Navi does look the part, and he plays Jackson with the otherworldly earnestness that fans admired in him (though he sometimes sounds like he’s cycling through accents). Whitfield and Beard stick around even as they stop being paid, and though their frustrations mount, they continue to profess loyalty to Jackson. The closest the film gets to criticism of the star himself is a lengthy and striking scene of him, clearly addled by substances, going on an extravagant shopping spree at FAO Schwarz even as his guards are owed backpay.

Jackson’s estate has issued a statement distancing itself from the movie, but all involved have insisted the point of the film is to bolster his legacy. “I didn’t do this project from a business point of view,” Navi has said. “I do this project from my heart, from a Michael Jackson fan’s point of view.” The actor Chad Coleman, who plays Bill Whitfield: “There’s nothing salacious here. … this project is something that I believe will allow his fans to be able to properly grieve the man.” They seem earnest in their avowals to do right by Jackson, and Searching for Neverland is indeed a gentle portrayal, propping up Jackson’s image as a devoted family man ill-equipped to survive the demands of fame. Nevertheless, the irony of this project and others like it is unmissable. Jackson’s life was a commodity, whether he always wanted it to be or not. Searching for Neverland mourns the exploitation of him in his final years; its very existence also, necessarily, exploits his final years.

War Machine Isn’t Sure What Kind of Movie It Is
May 25th, 2017, 06:37 PM

Michael Hastings’s 2010 Rolling Stone article “The Runaway General,” a chronicle of now-retired General Stanley McChrystal’s brief tenure as the commander of operations in Afghanistan, remains a wild read today. A powerful piece of journalism that cost McChrystal his job, the story offered a look inside the behavior of the military elite and was stunning simply because of the level of access Hastings had into their hard-partying lifestyle. Hastings, who died in 2013, turned his article into a book, The Operators, which examined McChrystal’s rock-star reputation and how it disintegrated as he tried to win a supposedly unwinnable war.

This may all sound like fertile territory for a satire—an acidic, no-holds-barred account of America’s troubled endeavors in the Middle East—but the director David Michôd’s War Machine isn’t quite sure how cynical it wants to be. A somewhat fictionalized account of Hastings’s book (the main characters’ names are changed, though the film keeps the characters of then-Secretary of State Hillary Clinton and President Obama), the movie is going for a boots-on-the-ground look at the mistakes and horrors of the Afghanistan War. Except, that is, when it’s trying to be a rollicking comedy anchored by a broad Brad Pitt performance as Glen McMahon, a McChrystal stand-in.

War Machine, which rolls out on Netflix (and in a limited theatrical release) on Friday, is caught between two poles, looking to humanize and contextualize McMahon’s rise and fall while clearly rooting for him to fail from the outset. The result is a bizarre genre—the war dramedy, one could call it—that crosses the profane, internecine, Veep-like office politics of the military’s top brass with more brutal, soldier’s-eye-view battle footage. War Machine is a failure, but could perhaps have been a great film if it had tried a little harder to pick a tone.

The movie begins with voice-over narration from Sean Cullen (Scoot McNairy), the roving journalist who stands in for Hastings. Sean explains the regimented outlook of General McMahon, appointed in 2009 to win the war in Afghanistan. He’s beloved by the men who have served directly under him, and accompanied by a tight inner circle of soldiers who attend to his every whim. Though a charismatic leader who believes in the importance of outreach to local Afghan leaders and soldiers, McMahon seems able only to speak in homilies and circular dialogue.

His plan—to win the hearts and minds of Afghan citizens through sustained promotion of democratic values—is at odds with the chaos that surrounds his troops, an occupying force that is increasingly despised by Afghans sick of war. At least, that’s what Sean tells us in voice-over. War Machine’s script, also written by Michôd, is strangely didactic, especially considering Sean doesn’t enter the action until the last act of the movie, and his presence is pretty minimal even at that point.

Sean serves as a vague voice of conscience, critiquing McMahon’s plan of victory before the movie even begins. Sean’s blanket dismissal makes the rest of the film (which runs for a dreary two hours) feel pointless, since the entire plot revolves around McMahon’s doomed attempts to win. His plan, such as it is, largely consists of meeting with various people: then-Afghan leader Hamid Karzai (Ben Kingsley), who’s uninterested in leaving the state palace; local leaders who tell him the war cannot be won; and Secretary of State Clinton, who warns him against asking for more troops.

Unbowed, McMahon does exactly that, and tries to maneuver his way into securing a huge commitment of resources. The way Pitt plays him, constantly squinting into the middle distance and holding his hands in a claw-like fashion, McMahon might as well be a character out of Catch-22, a foolish warrior-turned-bureaucrat with delusions of victory in the face of an obvious quagmire. But Michôd is a much more restrained director (his previous works include several documentaries and the grim Australian crime epics Animal Kingdom and The Rover). The filmmaker seems uncomfortable with Pitt’s antics and the goofy ensemble of consultants around him (including a hopped-up Topher Grace and a particularly aggressive Anthony Michael Hall).

War Machine kicks into a higher gear when following the travails of a depleted, exhausted company of soldiers in the bloody Helmand province, which McMahon seeks to regain control of. Lakeith Stanfield and Will Poulter are standout performers in these segments, including a harrowing sequence where they accidentally shell a civilian household. But such scenes feel worlds away from the comic hijinks of McMahon needling the White House for troops and going on a raucous diplomatic tour of European bases.

That tour is what eventually does McMahon in, as it did McChrystal—it’s where his men, drinking and carousing in front of Sean (furiously scribbling notes), loudly criticize the Obama administration and brag of their own military prowess. And it’s a dismal foregone conclusion set up by the film’s opening minutes, one that takes far too long to arrive, with far too little learned in the meantime. War is hell—or is it just bleak comedy? Michôd doesn’t seem to know.

The Real Reasons for Marvel Comics’ Woes
May 25th, 2017, 06:37 PM

Marvel Comics has been having a rough time lately. Readers and critics met last year’s Civil War 2—a blockbuster crossover event (and a spiritual tie-in to the year’s big Marvel movie)—with indifference and scorn. Two years of plummeting print comics sales culminated in a February during which only one ongoing superhero title managed to sell more than 50,000 copies.* Three crossover events designed to pump up excitement came and went with little fanfare, while the lead-up to 2017’s blockbuster crossover Secret Empire—where a fascist Captain America subverts and conquers the United States—sparked such a negative response that the company later put out a statement imploring readers to buy the whole thing before judging it. On March 30, a battered Marvel decided to try to get to the bottom of the problem with a retailer summit—and promptly stuck its foot in its mouth.

“What we heard was that people didn’t want any more diversity,” David Gabriel, the company’s senior vice president of sales and marketing, told an interviewer at the summit. “They didn’t want female characters out there. That’s what we heard, whether we believe that or not ... We saw the sales of any character that was diverse, any character that was new, our female characters, anything that was not a core Marvel character, people were turning their nose up against.”

Despite an attempt by Gabriel to walk back the quote, the remarks kicked up another firestorm of criticism by those concerned Marvel was shifting the blame for poor sales onto “diverse” characters—particularly since, contrary to the company’s claims, sales data showed that minority-led books were actually doing relatively well compared to books starring white male characters.

At first glance, the dustup was an industry cliché: The relative lack of diverse creators—and characters—has been a bone of contention for years at both DC and Marvel. But in the aftermath of Marvel’s rocky first quarter—and with the controversial Secret Empire now in full swing—it’s clear the publisher’s problems run deeper than an ill-timed storyline or public-relations fumbles. Audiences are drifting away. New fans feel ignored. Despite movies that dominate the cultural landscape and regularly clear millions of dollars, the entire edifice of corporate superhero comics represented by both publishers has been quietly crumbling for years, partially due to Marvel’s own business practices. Marvel can’t seem to actually sell comics, diverse or not—and the company only has itself to blame.

* * *

The comics industry these days is much diminished from its heyday. Beginning in the 1970s, corporate comics publishers moved away from selling through newsstands and grocery stores, turning instead to “the direct market,” which allowed buyers to purchase books straight from the publishers. This change both fueled the growth of specialty-comics shops and led to the corporate monopoly held by Diamond Comics Distributors, the middleman between retailers and publishers. In the 1990s, an issue of the popular The Amazing Spider-Man that sold around 70,000 copies would be considered a failure. The collapse of the comics speculation bubble in the mid-1990s—a bubble partially fueled by Marvel’s own encouragement of the speculator boom and flooding of the market—dealt the market a blow it never quite recovered from. These days, what counts as a successful superhero book is anything that can regularly sell 40,000 to 60,000 copies. Most sell quite a bit less.

As it happens, speculation is an inherent feature of the direct market. Unlike in traditional publishing, comics sold to retailers through the direct market can’t be returned for a refund. So retailers have to preorder comics months in advance, knowing that if they order too many, they’ll be stuck with the overstock. Marvel and DC largely judge sales based on these preorders, and a low number of initial preorders can lead a publisher to cancel a series before a customer ever gets a chance to buy the first issue. There’s an incentive for publishers to push out as much product as they think the market will bear, and a narrow window for feedback. Due to the preorder system, books that might reach out to new audiences—such as those starring minority characters—are at an immense disadvantage right out of the gate. As a result, books like David F. Walker and Ramon Villalobos’s Nighthawk or Kate Leth and Brittney Williams’s Patsy Walker, AKA Hellcat!, and even spinoffs of popular series like Ta-Nehisi Coates’s Black Panther, rarely last long before being canceled.

The uncertainties of the direct market are something all comics companies have to navigate, and sales gimmicks like collectible “variant” covers and special, higher-priced issues are common. Big publishers like DC and Image enthusiastically take part in these gimmicks. But Marvel pursues them at a level that puts other publishers to shame. Its primary trick is the consistent (and damaging) strategy of relaunching books with new #1 issues or titles.

In 2013, for example, the writer Al Ewing began working on Mighty Avengers, focusing on a team of community-oriented superheroes led by Luke Cage and Jessica Jones. Fourteen issues later, Marvel relaunched it with a new #1 as Captain America and the Mighty Avengers, then canceled it nine issues in. In 2015, Ewing began writing both New Avengers and Ultimates, which followed characters from Mighty Avengers. Marvel relaunched both a year later—again with new #1s—as Ultimates 2 and USAvengers. Sound complicated? It gets worse: The 2013 Mighty Avengers was the third series to use the title; the 2015 Ultimates was the seventh. Both are unrelated to previous series. Such a publishing scheme is convoluted even for a committed fan; for a new reader, it’s nearly impenetrable.

Marvel’s argument for this approach has typically been that new #1 issues both boost sales and pull in new readers. It’s true that a #1 issue tends to sell quite well on the direct market—but since retailers are ordering inflated amounts sight unseen, it’s an artificial bump at best, and sales drop sharply afterward. In fact, according to an exhaustive and entertaining analysis by the writer and game designer Colin Spacetwinks, this constant churn badly erodes the readership. G. Willow Wilson’s excellent Ms. Marvel, a series starring a young Muslim heroine from Jersey City, debuted at a circulation of roughly 50,000 before holding steady at 32,000; the relaunched version a year later began at around 79,000 before dropping sharply to a current circulation of around 20,000. “Marvel’s constant relaunching ... has been harmful to direct market sales overall,” Spacetwinks writes, “as well as harmful to building new, long-term readers.” With every relaunch, it becomes easier to jump off a title.

Another source of instability lies in the way corporate superhero comics have largely moved away from long tenures by creative teams. Artists are now regularly swapped around on titles to meet increased production demands, which devalues their work in the eyes of fans and rarely lets a title build a consistent identity. (Imagine a television show using a new cast and crew every few episodes for a sense of how disruptive this is.) Marvel and DC are both guilty of this, but neither seems to have grasped how damaging it actually is to the books themselves—and Marvel has pursued the practice for longer.

Marvel’s editor-in-chief Axel Alonso told an interviewer at March’s retail summit that he didn’t know if artists “[moved] the needle” anymore when it came to sales. The fact that Marvel has trained audiences to regard those artists as disposable doesn’t seem to have crossed his mind; nor does the possibility that buyers—like a few prospective comics fans I know—might be turned off by constantly rotating art teams.

Marvel’s instinct with readers who do stick around, meanwhile, has been to squeeze them for all they’re worth. Marvel comics tend to be priced at around $3.99 to $4.99 for 22 pages, and many series ship new issues twice a month. (Digital editions are usually priced about the same.) Marvel publishes around 75 ongoing series, along with miniseries and single-issue specials. (DC, for comparison, made a concerted effort for the last few years to publish around 50 ongoing series and also had trouble making them stick.) April alone saw five “Avengers”-titled books. Then there are the crossover events—four so far this year—which interrupt the storylines of ongoing series and require readers to buy multiple other books to understand what’s going on. Reading Marvel, in other words, gets very pricey, very quickly, and the resulting flood of product exhausts retailers and ends up driving customers away.

* * *

Marvel’s marketing and PR must bear a hefty share of the blame as well. The company habitually places the onus for minority-led books’ survival on the readership, instead of promoting its product effectively. Tom Brevoort, the executive editor at Marvel, publicly urged readers to buy issues of the novelist Chelsea Cain’s canceled (and very witty) Mockingbird after the author was subjected to coordinated sexist harassment.

The problem, however, is that the decision to cancel Mockingbird was necessarily made months in advance, due to preorder sales to retailers on the direct market. The book itself launched with only a few announcements on comics fan sites; no real attempt to reach out to a new audience was made. Marvel’s unexpected success stories, like Kelly Sue DeConnick’s Captain Marvel, are largely built on the tireless efforts of the creators themselves. (In DeConnick’s case, she paid for postcards, dog tags, and fliers for fan engagement out of her own pocket, for a character she didn’t own or have a real expectation of royalties from.)

It might be argued that Marvel has to be judicious about which books it spends money to promote, and that good word of mouth can make up the difference for free. Again, the dropping sales numbers for Marvel’s books suggest this isn’t the case. But even if it were, the publisher’s word of mouth lately has been abysmal. The past decade has been a parade of singularly embarrassing behavior by Marvel writers and editors in public. The former editor Stephen Wacker has a reputation for picking fights with fans; so does the Spider-Man writer Dan Slott. The writer Peter David went on a bizarre anti-Romani rant at a convention (he later apologized); the writer Mark Waid recently mused about punching a critic in the face before abandoning Twitter. The writer of Secret Empire, Nick Spencer, has managed to become a swirl of social-media sturm all by himself, partially for his fascist Captain America storyline and partially for his tone-deaf handling of race and general unwillingness to deal with criticism.

What’s frustrating about all of this is that Marvel has recently demonstrated an interest in publishing good, socially conscious books. Ewing’s Ultimates and Avengers work is consistently charming and witty; Ryan North and Erica Henderson’s Unbeatable Squirrel Girl is an unalloyed delight; G. Willow Wilson and Adrian Alphona’s Ms. Marvel deserves all the praise it has gotten and more. Yet the company’s strategy has largely been to launch books into a flooded market—one, again, that it has itself flooded—and let them sink or swim. Books like The Amazing Spider-Man have enough name recognition that they’re always going to sell with minimal marketing. Books led by newer, more diverse characters, no matter how good they are, do not have that luxury. Marvel may publish good books, but without full commitment from the company, many of those books are being set up for failure—allowing Marvel’s audience to dwindle.

* * *

For all of the cultural preeminence of Spider-Man or The Avengers, the superhero-comics industry remains a sideshow. The media conglomerates that own DC and Marvel use both publishers largely as intellectual-property farms, capitalizing on and adapting creators’ work for movies, television shows, licensing, and merchandise. That’s where the money is. Disney has very little incentive to invest in the future of the comic-book industry, or to attempt to help Marvel Comics reach new audiences, when it’s making millions on the latest Marvel film. If the publisher wants to pull itself out of this slump, it’ll require a fundamental shift in the way the company thinks about selling comics. The trick is sustainability, not short-term profits, and that requires not just staunching the drain in customers but actively attracting new ones. That involves figuring out what prospective readers want, not what they will simply tolerate.

A potential example lies in popular series from Image Comics like Robert Kirkman and Charlie Adlard’s The Walking Dead and Brian K. Vaughan and Fiona Staples’s Saga. The former sells fairly steadily at around 75,000 units through the direct market, and the latter sells around 50,000.** Collected editions are regulars on graphic-novel bestseller lists. While the series are long-running, they offer a consistent and contained experience, with a writer and artist working in sync and constant fan engagement. These sorts of books aren’t constantly relaunched, and they aren’t burdened with multiple spinoffs. They’re easy to follow in collected editions. They don’t offer the dizzying direct-market highs of a new #1, but after years, they’ve maintained a dependable and fervent following.

Marvel and DC might emulate this model by cutting back on the number of series they publish and the frequency with which they ship them. Both companies could be more judicious in pairing artists and writers for sustained periods, promoting series outside of the usual channels, and warmly engaging with fans. Instead of simply telling people to buy their books, they could instruct new audiences how. And they could listen to what new audiences say they want: diversity not just in racial, religious, or sexual terms, but also in terms of the types of stories told: Is there really any more harm in publishing a comic where Captain America has a romantic cup of coffee with his boyfriend Bucky than one where he’s a Nazi?

There are signs that Marvel is beginning to take reader and retailer concerns partially seriously: The company has promised that its new “Legacy” initiative will keep crossovers to a minimum, will have fewer incessant relaunches, and will maintain a focus on diverse characters. The question is whether the company will be able to resist going back to its old habits. After all, there are only so many times you can relaunch yourself before people wonder if what you have is really worth buying.


* This article has been updated to clarify that just one ongoing Marvel superhero title sold more than 50,000 copies in February 2017.
** This article originally misstated that The Walking Dead series sells at around 50,000 units. We regret the error.

In the End, We Got the Baywatch We Deserved
May 25th, 2017, 06:37 PM

Baywatch, the internationally syndicated television show of the 1990s, is remembered today primarily for its synthetic body parts and secondarily for its massive viewership (the show boasted a weekly audience, at its height, of 1.1 billion people, spread across 142 countries). What is generally less well recalled, however, at least in the American cultural memory, is the show’s pioneering of a category of entertainment that has since become a favorite of Hollywood: the show that is so bad it’s not just good but profoundly awesome. Baywatch was so poorly acted that its oily thespians could be seen to be inventing, frame by frame, a novel strain of camp. Its stories were so patently absurd that they occasionally threatened to venture into full-on surrealism. There were, in this show, so many bouncing bodies, so many robotically delivered lines, so many animatronic sharks.

The best thing that can be said about Baywatch, the director Seth Gordon’s cinematic reboot of the TV series, is that it understands, and indeed fully embraces, the show’s awful-awesome aesthetic. The second-best thing that can be said about the feature film, though, is that it does what reboots will always do, purposely or not: serve as a measure of cultural change. Reboots may be cynical cash grabs, exploiting audiences’ nostalgia for the past, or at least for a time when their bodies better resembled those of, say, David Hasselhoff and Yasmine Bleeth; reboots are also, however, implicit markers of the progress that has been made in the years between the original and the update. The new Baywatch is, say what else you will about it—and there is definitely much to be said, else-wise—a heady mixture of both of those things.

The story goes like this: Mitch Buchannon (Dwayne Johnson, radiating his typical, effortless charisma) is the head of Baywatch, an elite crew of lifeguards who—inexplicably but also repeatedly, just as in the TV show—spend much of their time fighting beach-related crimes. As Baywatch opens, Buchannon and his team are looking for new members. Throngs of would-be watchers of the bay come out for the chance to join Mitch, C.J. (Kelly Rohrbach), and Stephanie (Ilfenesh Hadera) in the Spandex-clad squad—among them Summer (Alexandra Daddario), an athlete with a no-nonsense attitude; Ronnie (Jon Bass), a nerd with So Much Heart; and, finally, Matt Brody (Zac Efron), a retired Olympic swimmer in the Ryan Lochte mold, who was a star in individual events but—hold on to your metaphors—let the U.S. team down during the team-relay races. There’s also a drug-running villain (Priyanka Chopra), and a plot twist that—

—but, wait, you didn’t come for the plot. None of us, the writers of this film included, came for the plot. Suffice it to say that Baywatch, as you’d probably expect, features action and adventure and creatively utilized jet skis and a healthy dose of what the Instagrammers call #fitspo. Suffice it to say, too, that there’s a lot of delight to be had in the frothy union of the Bay and the Watch, much of it coming from Johnson, who carries the whole of this movie on his epically chiseled lats.

There’s more than The Rock to admire here, though: Baywatch, compared to its 1990s source material, features a decent amount of diversity in its casting—and it both takes the diversity for granted and also, throughout the film, makes it the subject of light-hearted humor. (“Are you Batman?” a boy asks Mitch, wonderingly, after the lifeguard-hero has rescued him from a watery demise. “Yeah,” Mitch replies, “just bigger and browner.”) Baywatch also features beautiful cinematography—never in the long history of the franchise have cameras so lovingly captured lifeguards’ dives into a crystalline sea—and action scenes that manage to be violent and whimsical at the same time, and a varied and energetic soundtrack, populated by the likes of Pras and Sean Paul and Lionel Richie.

And, of course, this being a reboot: There’s also, inevitably, the winkiness—the playful callbacks to the past, the cheerful fan service, the knowing nods to the terrible/wonderful TV show that serves as the source material for the newer concoction. In Baywatch’s case, the knowingness includes, but is by no means limited to: obligatory shots of the cast running in slow motion, punctuated with characters making jokes about running in slo-mo; characters murmuring about how the plots they are living would make “an entertaining but far-fetched TV show”; and one Baywatch-er marveling of another, capturing the ultimate paradox of the original show’s distinctive aesthetic: “She’s wet ... but not too wet.”

There are also, it must be said, lots of good jokes. If you see Baywatch, you will very probably find yourself laughing. You might even find yourself, multiple times, caught in full, cathartic guffaw. Mitch, throughout the movie, gives ad-hoc new nicknames to Matt, most of them (“One Direction,” “Malibu Ken,” “McDreamy,” “Bieber,” “Baby Groot,” and, yep, “High School Musical”) effectively mocking the character’s uncanny resemblance to Zac Efron. There’s a sun-bleached surfer whose unintelligible English is translated for the audience via subtitles. There’s lowbrow humor and pratfall humor and a fantastic set piece that I will not spoil but that I will simply say involves The Rock and a Sprint store. There’s also, in all this, an infectious sense of fun: The actors here seem to be having the time of their lives as they run around on the beach in a manner that is cheeky in every sense of the word. Their enjoyment leaps off the screen, right into mouths that are gapingly mid-guffaw.

So, then, why does Baywatch currently have a 15 percent approval rating on Rotten Tomatoes? Why have entire news articles been devoted to the film’s chilly reception among critics? Why did The Daily Beast’s review of the movie scold, “How Dare the Baywatch Movie Be This Bad”?

That would probably have something to do with the other thing mashed into a film that so often tries to have it not just both ways, but all of them: Baywatch relies, ridiculously often, on jokes that put the gag in “gag.” In this R-rated movie that seems determined to make the most of the rating, male characters are accused of possessing “manginas.” Much fun is made of taints and breasts. Even more fun is made, during one scene, of Zac Efron in drag. A hefty percentage of the jokes here, too, take it for granted that the most subversively hilarious thing that can happen in a movie is to show two men kissing and/or one man touching the scrotum of another. Such “edgy” jokes, here, come as a subset of the largest category of gag in this gagtastic film: There are so many penis jokes in this movie. (No, but really: I cannot emphasize enough how many jokes about penises Baywatch managed to pack into 116 minutes of run time.)

So: modern and regressive! Savvy and silly! So knowing, and so deeply ignorant! What it all amounts to, for the viewer, is a rough approximation of being caught in a sparklingly CGI-ed riptide: Just when you’re close to shore, you get sucked out into the depths (where inevitably you will encounter, swimming in the dark water, yet another penis joke). Baywatch casts a woman of color (Hadera, of Chi-Raq and the most recent season of Master of None) as one of its central model-lifeguard-detectives. It then gives her next to nothing to do. The movie celebrates Summer’s athletic prowess at the triathlon-esque auditions Mitch and crew put on to find the next members of the Baywatch squad; it then finds her engaged in an extended conversation with Matt Brody about, yep, her breasts. The film gives Chopra a great, feminist line—“If I were a man, you’d call me driven,” the villain remarks as an enemy questions her villainy—but also features a woman Mitch has rescued fawning to her beefcake of a savior, “Oh, que guapo! If you want me, you can have me!”

So Baywatch is both self-aware and clueless; it is a product of both the 21st century and the 20th; it is populated with both penis jokes that are amusing and penis jokes that, like their subjects, end up lodged, awkwardly and painfully, in the slats of beach furniture. There’s a little bit of Farrelly here, and a little bit of Foucault, and the end result is often delightful but also, just below the glinty surface, deeply confused about what it is and who it is for.

What I am trying to say is that, to the extent that reboots are measures of cultural progress, Baywatch is a movie that is, yes, also a metaphor—a muscle-bound and liberally spray-tanned status update on behalf of all of America. We have come so far, since the ‘90s … but also, Baywatch reminds us, repeatedly, not far enough. Here is a movie that, like the place that created it, is decidedly ambivalent about progress itself. Here is a movie that knows it could be better. Here is a movie that sometimes tries to be. And here is a movie that, disappointingly often, fails. Baywatch has global aspirations, certainly—that is another way it resembles the TV show—and yet to watch it is also to watch the America of 2017 reflected back to itself via taut actors who spend much of the film clad in swimwear of red and blue. As soon as the movie moves forward, it moves back again. Baywatch giveth; it taketh away; it maketh just one more penis joke; and then, giggling, it runs off along the beach, sun-drenched and Spandexed, caught in slow motion—moving, always, but in another way not moving much at all.

‘Instagram’ for 18th-Century Tourists
May 25th, 2017, 06:37 PM

As a mantra, “pics or it didn’t happen” carries a clear whiff of internet-age modernity. But in many ways, the sentiment behind the phrase precedes smartphones, Snapchat, and selfie sticks by some 275 years. Eyewitness Views: Making History in 18th-Century Europe, a new exhibition at the J. Paul Getty Museum in Los Angeles, looks at the Enlightenment-era phenomenon of vedute, or view paintings: astonishingly detailed cityscapes of Venice, Rome, Paris, and other tourist hotspots. These canvases were highly collectible luxury souvenirs, pictorial portals that would later transport the visitor (and friends back home) to that faraway place and moment. Their strict perspective lent itself to formal gardens, neoclassical arcades, and canals lined with palazzos.

But vedute were more than glorified postcards, the Getty curator Peter Björn Kerber argues in his sumptuous exhibition catalog. They also served as proof that one had personally encountered the cultural and architectural marvels of Western civilization—a kind of proto-Instagram. Many vedute included portraits of the tourist or diplomat who had commissioned them. Others depicted newsworthy events the visitor had witnessed firsthand, from royal weddings to volcanic eruptions. Though dwarfed by their surroundings, the figures in these paintings are identifiable by details of dress or by their positioning, slightly larger than life or perhaps illuminated by a strategically placed shaft of light.

Artists took pains to give vedute the illusion of authenticity, both in their photorealism and in their knack for putting the audience in the scene. The observer is not just a fly on the wall, but also seemingly an active participant in history in the making. In one image, Kerber points out, the Venetian painter Canaletto adopts a perspective that could only have been seen from a boat in the middle of a teeming canal, placing the viewer literally in the middle of the action. The Roman artist Giovanni Paolo Panini often included a self-portrait in his vedute, a meta touch that proved his works’ accuracy.

Visual cues give a sense of time as well as place: setting suns, rippling flags, wisps of smoke from a just-fired cannon salute lingering in the air. In Panini’s 1747 depiction of a packed performance of a cantata in the Teatro Argentina, Kerber points out, it is even possible to identify the specific musical passage being played from the positioning of the musicians: the drummers poised for a downbeat, trumpeters at the ready.

Giovanni Paolo Panini’s The Musical Performance in the Teatro Argentina in Honor of the Marriage of the Dauphin, 1747 (RMN-Grand Palais / Art Resource, NY)

Foreign ambassadors to Italy were the first to recognize and exploit the image-making potential of the new vedute style. By tradition, they made their ceremonial entrances to their host cities in processions of coaches—or, in Venice, gondolas—specially purchased or constructed for the occasion, and so elaborate that the ceremonial entrance might take place a full year or more after the ambassador’s actual arrival, Kerber notes. Given these careful and costly preparations, Kerber writes, it’s understandable that diplomats wanted to capture these career-defining moments on canvas, simultaneously burnishing their reputations at home and abroad.

But the genre also responded to the no-expense-spared festivals and pageants that characterized 18th-century urban life, transforming these ephemeral entertainments into lasting art. Just as digital photography and social media have challenged many people to step up their game when it comes to food presentation, contouring, and interior decoration, so did the popularity of vedute perpetuate a vicious cycle of competitive celebrating.

Lavish public entertainments might feature balloon launches, fountains flowing with wine, or temporary temples and triumphal arches that doubled as launch pads for fireworks. Artists found ways to capture (and even improve upon) the grandest spectacles the capitals of Europe had to offer. At the time, vedute were the visual equivalent of the hyperbole found in breathless letters and published accounts of the events they portrayed. For example, the world traveler Lady Mary Wortley Montagu described the Venetian regatta of 1740 as “a magnificent show, as ever was exhibited since the galley of Cleopatra” in a detailed letter to her husband, which would have been circulated among their friends. Then as now, the goal was to inspire as much envy as possible.

Canaletto’s Venice: Feast Day of Saint Roch c. 1735 (National Gallery, London / Art Resource, NY)

Venice—with its distinctive topography, picturesque carnivals and regattas, and constant influx of tourists and dignitaries—was tailor-made for the vedute treatment. Images of curious Catholic rituals such as the Good Friday procession were especially exotic and appealing to Protestant tourists from Northern Europe. The most illustrious visitors to Venice were honored with a bull chase in the Piazza San Marco; imagine the running of the bulls at Pamplona with the added distractions of hunting dogs, elaborate costumes, an orchestra, and acrobats descending from the top of the Campanile. It was the kind of over-the-top extravaganza that had to be seen to be believed, and vedute painters capitalized on this uniquely 18th-century brand of FOMO.

Underlying the spectacle, however, was the simple pleasure of people-watching. Vedute are populated by casts of thousands; bystanders might include a colorful assortment of schoolchildren, hoop-skirted ladies, tradesmen, monks, coachmen, beggars, soldiers, and gondoliers. Artists also injected humorous vignettes, such as the elegant gentleman attempting to show reverence to the sacrament in a Corpus Christi procession while taking care not to let his silk-stocking-clad knee touch the ground, or the pair of dogs sniffing each other in the foreground of an ambassadorial meet-and-greet. The level of detail and narrative sophistication in vedute repays careful study; it’s no exaggeration to say you could spend hours looking at them.

But the photographic quality of vedute obscures the fact that artists rarely aspired to #nofilter naturalism, instead embellishing their subjects for visual and political impact. In one canvas, the Grand Canal makes a 180-degree turn, all the better to squeeze as many architectural landmarks as possible into the frame. Artists often amplified (or downplayed) physical reality for political gain. In one painting recording a diplomatic reception, Panini added architectural grandeur to a building he (or his patron) apparently deemed too modest, doubling its complement of pilasters. By contrast, in another canvas, Panini brazenly shrank the monumental facade of Rome’s most famous building, St. Peter’s Basilica, to make the figure of his patron arriving at the church on horseback look more imposing. It was typical for the embellished pictorial and written records to eclipse the actual event. As the courtier Count Maurepas quipped: “Celebrations are never as beautiful as they are on paper”—an idea echoed in today’s “Instagram vs. Real Life” meme.

Luca Carlevarijs’s Regatta on the Grand Canal in Honor of Frederick IV, King of Denmark, 1711 (J. Paul Getty Museum)

Ironically, many of the supposedly “ephemeral” scenes vedute painters recorded for posterity still exist. Giuseppe Zocchi’s 1739 image of the Palio di Siena is easily recognizable from the horse race’s recent appearance in a James Bond movie; both the setting and the event survive more or less intact today. Modern Venice may teem with cruise ships rather than ceremonial barges, but the Oxford don Joseph Spence’s 1741 description of the Piazza San Marco as being so crowded that it looked “as if it were paved with heads” will resonate with anyone who’s been there lately.

Because of the commemorative nature of vedute paintings, they typically ended up a great distance from the iconic locales they depicted with such precision. The Getty show (which will travel this fall to the Minneapolis Institute of Art and from there to the Cleveland Museum of Art) unites pieces from far-flung museums, castles, and country houses. Most curators have a couple of Canalettos or Carlevarijses in their collections, but seeing dozens of them displayed together—something that’s never been attempted before, partly due to the complex logistics—offers compelling evidence that, even in the pre-Instagram age, people found ways to insert themselves into contemporary history, and manipulate it to their advantage.

ABC's Dirty Dancing Remake Is a Sad, Strange Production
May 24th, 2017, 06:37 PM

Inside ABC’s tonally bizarro update of the seminal 1987 romantic drama Dirty Dancing are about four different projects trying to get out. There’s the most obvious one, a frame-by-frame remake of the original that’s as awkward and ill-conceived as Gus Van Sant’s 1997 carbon copy of Psycho. There’s the one Abigail Breslin’s starring in, an emotionally textured and realistic coming-of-age story about a clumsy but engaging wallflower. There’s a musical, in which Breslin and Nicole Scherzinger mime along to their own singing voices in a strange dance rehearsal while half-heartedly exploring the idea that power emanates from the vagina. And there’s the most compelling story, a Wide Sargasso Sea-inspired spinoff starring Debra Messing as a lonely housewife coming to terms with the turbulent depths of her own desire.

What was ABC thinking? How could a simple remake go so wrong? How did the wholesome family location of Kellerman’s become a raunchy karaoke joint where Katey Sagal performs such a steamy rendition of “Fever” that an aghast Dr. Houseman tells his wife she needs to leave? Is that Jennifer Lopez’s former toyboy juggling watermelons? The questions, they abound. If you’re determined to tune in on Wednesday evening, rest assured there will be ample commercial breaks during the turgid three-hour running time to ponder all of them.

This made-for-TV remake, directed by Wayne Blair, is the latest in a fleet of extravagant television musicals, with ABC seemingly panicking in its rush to capitalize on a heaving new trend (its upcoming production of The Little Mermaid will be performed in October in a mind-bending amalgam of animation and live performance). Dirty Dancing, like Fox’s recent remake of The Rocky Horror Picture Show, suffers from being pre-taped, and therefore having nothing to distinguish it from the far superior original movie other than incessant advertising interludes. Breslin plays Baby, a bookish teenager heading to a Catskills vacation resort with her father (Bruce Greenwood), her mother (Messing), and her sister, Lisa (Sarah Hyland), who’s been inexplicably transformed from an abrasive antagonist to a sweet and supportive sibling.

For the first hour or so, everything is pretty much standard-issue imitation: Baby carries a watermelon, Baby crashes a party and becomes enamored with a pelvis-thrusting bad boy in leather (Colt Prattes), Baby learns to dance, amid churlish comments about her “spaghetti arms” and a soundtrack of ’60s classics. But even the songs, recorded by the likes of Karmin and Lady Antebellum rather than The Shirelles and Otis Redding, ring hollow. In the 1987 movie, the music evoked a sense of nostalgia for a bygone era. Now, the updated covers evoke nostalgia for nostalgia, an Inception-like feat of physics that only reminds you how much better this all was when Patrick Swayze was in it.

That’s not to insult Prattes, a Broadway actor who does his best with an impossible ask in emulating Swayze’s febrile, snake-hipped magnetism. His Johnny is convincingly vulnerable, masculine, and chippy, but he has negative chemistry with Breslin, whose performance as Baby seems to have been interpreted for a high-school Ibsen play rather than a TV musical. Swayze and Jennifer Grey famously hated each other, which perhaps sparked some of the passion in their scenes together; Breslin and Prattes have all the ritual awkwardness and squelched physicality of a father-daughter dance.

The most interesting part of Dirty Dancing is its expansion of Marjorie Houseman from a cheery and oblivious young Emily Gilmore to a tragic operatic heroine continually begging her husband to sneak back to their room for a quickie. This is presumably how Blair persuaded Messing to come on board, sweetening the deal with a musical number, in this case a sad-eyed performance of “They Can’t Take That Away From Me.” The ballad of Marjorie is amped up with the expanded presence of Vivian Pressman (Sagal), a predatory divorcée who shoves Rolexes into Johnny’s jeans and laments how she can’t sleep alone at night because the walls creak. It’s a fascinating psychosexual exploration of middle-aged female desire that’s completely at odds with everything else going on. More to the point, when Baby’s discovery of her burgeoning womanhood is superseded by her mother’s, there’s a problem.

All of this only gestures to how unlikely a hit the original was: a ’60s dance movie with ’80s costumes starring two relative unknowns that Roger Ebert dismissed as “a tired and relentlessly predictable story of love between kids from different backgrounds.” But for the most part, it kept things simple, relying on the physical energy of Grey and Swayze to spin a summer-lovin’ fantasy. This contemporary version, stuffed with subplots and extended dance sequences and terrible writing (“We’re all gonna be worm food, anyways,” Baby tells Johnny in one impressively lust-squashing shrug of a line), can’t decide whether it wants to emulate the original Dirty Dancing or transform it into Chekhov. Either way, it’s less the time of your life and more three hours you’ll never get back.

Don't Overinterpret The Handmaid's Tale
May 24th, 2017, 06:37 PM

As someone who likes to build up my capacity to imagine the worst, I’ve been finding The Handmaid’s Tale, the new television series adapted from Margaret Atwood’s 1985 dystopian novel, harrowing to watch. The show is an investigation into religious totalitarianism and patriarchy, and perhaps more interestingly a meditation on collaboration and complicity. I’ve been struggling with it because it seems, at times, so plausible, but also so far-fetched.

In creating the fictional Gilead—a theocratic regime that comes to power in the United States after falling birthrates and terrorist attacks lead to mass panic, then a culture of enforced sexual servitude—Atwood was issuing a warning. That the television series has come out in the era of Donald Trump has apparently helped make it a sensation. “What if it happened here in America?” viewers and critics are asking. Yet, something like Gilead couldn’t happen here, in part because it hasn’t happened anywhere.

Saudi Arabia, for example, might be an authoritarian theocracy—state law requires citizens to be Muslim and prohibits non-Muslim public worship—but it is not totalitarian. Various competing religious movements and networks operate, if unofficially, in the country, and complex tribal patronage systems provide routes for citizens to accrue resources from the state, as well as some degree of accountability.

Even the Islamic State, which does engage in sex slavery, otherwise diverges from the model of Gilead’s Christian fundamentalists. Gilead’s biblical judgments often seem laughably arbitrary and primitive (noncompliance is punished with eye-gouging, for example). ISIS is similarly comfortable with performative brutality, but it set up fairly complex and elaborate judicial and legal structures, including detailed tax codes and counterfeit statutes. The group’s interlocking sharia courts, binding fatwas, and economic regulations amount to what Yale University’s Andrew March and Mara Revkin term “scrupulous legality.”

A comparison to pre-modern Christian states may be more apt. In places like John Calvin’s Geneva, efforts to enforce moral discipline ranged from the obvious (punishing sexual deviance) to the odd (making Bibles available at pubs to encourage spiritual reflection). In the early 16th century, the Protestant reformer Ulrich Zwingli likened Christian life to “a battle so sharp and full of danger that effort can nowhere be relaxed without loss.” But pre-modern states, due to their lack of technology, surveillance powers, modern armies, and massive bureaucracies, were fundamentally different. Even when they wanted to be, they couldn’t be all-encompassing. They couldn’t be total.

Some liberals have managed to draw parallels closer to home, which has led to some absurdly mismatched comparisons. The New Republic’s Sarah Jones writes that “Texas is Gilead and Indiana is Gilead and now that Mike Pence is our vice president, the entire country will look more like Gilead, too.” No, Texas is not Gilead; it’s a state where people are peacefully and democratically expressing social conservatism. And as for the nation, Americans did just elect the most secular president perhaps in the country’s history.

As someone who wrote last year in The Atlantic that “it” could happen here, running through a number of worst-case scenarios under a then-hypothetical President Trump, I believe it is sometimes just as important to argue that it can’t happen here. It is, of course, possible that the United States could experience a religious awakening, particularly if partisan polarization and Trump-style ethno-nationalism exhaust enough people. But the fact that Christian intellectuals like Rod Dreher and Russell Moore have resigned themselves for now to a “post-Christian” society—the idea being that Christians are an embattled minority that has lost the culture wars and that would be better off making a “strategic retreat” from America’s increasingly secularized public life—suggests that the time horizon for any such change is quite long.

But even if the United States did experience some kind of transformation of Christian consciousness, it wouldn’t—very likely couldn’t—be anything like the society described in Atwood’s novel. To suggest that this is a scenario worth taking seriously because it’s in the realm of possibility is to assume that public religiosity, regardless of how it is expressed, is automatically negative and something to be opposed. It’s to imply that conservative Christians are basically akin to totalitarians—or could theoretically become totalitarians—simply because they believe their faith has something important to say about public life and politics.

The leaders of Texas and Indiana may have retrograde views on gender, sexuality, marriage, and abortion, but they are democratically elected leaders nonetheless. To the extent that Texas is a socially conservative state, it is because voters in Texas are socially conservative—or at the very least there are enough of them who are comfortable with socially conservative policies. There is nothing intrinsically illegitimate about citizens of a state having what a liberal considers “bad” views, as long as they express them peacefully and democratically within the framework of the law and the constitution.

What makes Gilead, or for that matter any authoritarian theocracy, so terrifying isn’t just, or even primarily, the religious absolutism. It’s that religious laws, once promulgated, cannot be undone through the political process, because there is no political process. There are no elections and there are no opposition parties. There are no voters. Citizens have no recourse except to stay silent or to resist.

In other words, Christian evangelicals—or for that matter conservative Jews and conservative Muslims—who oppose abortion or gay marriage, or who refuse to dine with women or men other than their spouses, are not any less American. What would make them less American or un-American is if they believed, as a matter of faith, that democracy should be done away with and that there was only one truth that could be expressed by the state. Then the rest of us would have, quite literally, no choice. It is the closing of the avenues of possibility—and therefore of hope—that makes dictatorship, and not just the religious kind, so terrifying.

Why So Much Is ‘Bonkers’ Right Now
May 24th, 2017, 06:37 PM

On Friday, as a capper to a week that included a steady stream of breaking news about the doings of the Trump administration, Mother Jones sent a note of reassurance to its readers: “It’s Not Just You,” the magazine declared: “This Week Was Bonkers.” Vox, the same day, reporting on the movies of the Cannes Film Festival, announced that “Netflix’s Okja is a bonkers corporate satire starring Tilda Swinton and a superpig.” The Daily Beast, on Monday morning, wrote about David Lynch’s newly returned show, reporting that “Twin Peaks Is Back and More Delightfully Bonkers Than Ever.” The conservative political strategist Rick Wilson recently described the current situation of many of the president’s supporters in the government: “They’re afraid of Donald Trump going crazy,” he said—“you know, ripshit bonkers on them.”

“Ripshit bonkers” is an especially felicitous turn of phrase—Wilson later told the linguist Ben Zimmer that, as far as “ripshit” went, “my first memory of that word was from my (very) German great-grandfather when I was a child”—but “bonkers” requires no extra decoration. It describes things that are amusingly wacky, and, in the least literal of ways, insane: In some small sense, the past week was crazy. The movie was crazy. The new Twin Peaks is crazy.

But “crazy” is a fraught word, these days—and only partially because of its long history of undermining women, as a group. To call a movie or a TV show or a news event “crazy” is also to make light, or at least to run the risk of being seen as making light, of mental illness. Same with “insane.” Same with “wacko.” While “bonkers,” too, has a whiff of that connotation—“crazy, mad” is the brief definition Merriam-Webster offers of “\ˈbäŋ-kərz\,” and Urban Dictionary helpfully connects the adjective to, among others, “batty,” “bananas,” “cracked,” “crazed,” “demented,” “flipped,” “insane,” “maniacal,” “screwball,” and “unhinged”—it is farther removed from mental illness than its many semi-synonyms. “Bonkers,” coming as it does from the verb “bonk,” has a certain zaniness written into it, suggesting craziness of a decidedly whimsical strain. “Fans went bonkers when their team won” is how Merriam-Webster uses the word in a sentence.

So “bonkers” has risen steadily in English usage in recent decades, in some part because, as Bob Dylan might have put it in an early draft of the song, the times, they are a-bonkers. The Huffington Post, in March, offered “4 Reasons Why Trump’s Budget Is Bonkers.” The comedian Jennifer Saunders titled her recent memoir Bonkers: My Life in Laughs. The New York Times columnist Charles Blow, last year, argued that “‘Bernie or Bust’ Is Bonkers.” Cracked lists “4 Ways A Normal American Day Is Absolutely Bonkers to Others.” Jezebel features a “Bonkers” tag. And people on social media regularly assess the news in terms of its relative bonkers-ness—using the colorful adjective at once to undermine events as happenings and to elevate them as entertainments.

Another thing that gives “bonkers” its appeal in American English: The word is imported from the British version of the language, in the rough manner of “cheeky,” and “fancy,” and “twit,” giving it the soft sheen of the foreign. “Bonkers” seems to have appeared in the U.K., for the first time, around 1945: That year, a Daily Mirror article noted, “If we do that often enough, we won’t lose contact with things and we won’t go ‘bonkers.’” (What activity “that” was referring to has, sadly, been lost to time.) John Osborne’s 1957 play The Entertainer used it (“We’re drunks, maniacs, we’re crazy, we’re bonkers, the whole flaming bunch of us”), as did Kingsley Amis’s Take a Girl Like You, in 1960: “Julian’s absolutely bonkers, too, you know.”

The precise etymology of “bonkers” is unknown, but it likely came about the way Eric Partridge, in A Dictionary of Forces’ Slang, hinted at in his 1948 definition of the term: “Bonkers, light in the head; slightly drunk. (Navy.) Perhaps from bonk, a blow or punch on the bonce or head.” (“Bonkers” might also be connected to the other verbal senses of “bonk”—to have sex with, as in Mary Roach’s delightfully titled book, or more recently, in endurance sports, to hit a wall.) The word, with its multi-dimensional utility, quickly crossed over to the U.S.; its first known citation in America came in 1965, from the New York Times reporter Israel Shenker, who availed himself of the word’s alliteratively poetic possibilities: “In Paranoia, his newest picture,” Shenker wrote, “Italy’s Marcello Mastroianni goes slowly bonkers sharing bath, bed, and Bedouin with three co-stars.”

Since then, “bonkers” has enjoyed life on both sides of the Atlantic, as an adjective and a proper noun: It has given its name to a board game, and an animated TV show, and a children’s party venue in Columbia, Missouri, and a late, lamented brand of chewy candy.

In the TV commercials for the Starburst-esque treats, Bonkers’ tagline was “Bonkers! Bonks you out!” And that is the sense that is often employed today, as American citizens—members of the media both professional and not—take a look at the world swirling around them and decide that things are, indeed ... yeah. The media? Apple’s new campus in Cupertino? A conspiracy theory about Avril Lavigne? The president? Bonkers all, the people decide. And yet—here is one more benefit of the bonk—the people haven’t, in the end, decided much at all.

“Bonkers” is, despite its zeal for the zany, notably hesitant. It resists making a value judgment. It throws up its hands. “Crazy,” even disentangled from its psychological sense, has a moral valence. So does “insane.” “Bonkers,” though, marvels at the thing while doing very little to judge the thing. It suggests a kind of assessment fatigue on the part of its user, a tendency to find current events not straightforwardly good or bad, but simply abnormal. So the Brits, more than 70 years ago, created a word that would become uniquely suited to this American moment: a time when news can be so often confusing, and overwhelming, and, all in all, a little bit bonkers.

Roger Moore, Sultan of Self-Deprecation
May 24th, 2017, 06:37 PM

If the only work of Roger Moore’s you’ve encountered is his 12-year stint playing the British super-spy James Bond, rest assured you’re not missing much. This isn’t as callous as it sounds: Moore, who died on Tuesday at the age of 89, was the first person to assert that his range as an actor was limited, and that he shaped his characters into himself rather than the other way around. “My James Bond wasn’t any different to my Saint, or my Persuaders or anything else I’ve done,” he told The Telegraph last year, referring to the two television shows that preceded Bond. “I’ve just made everything that I play look like me and sound like me.”

So his Simon Templar—honey-smooth and jauntily eyebrowed, hair lacquered into submission—was much the same as his Ivanhoe. Even when Moore accepted a role on the fourth season of Maverick, the most quintessentially American show imaginable, he retained his English accent, and the show was left to weakly posture that his Texan character had simply picked up some British mannerisms after a few years overseas. An American accent for Roger Moore? Preposterous.

Moore, then, was a movie star in the old mold. No method-acting antics or extreme diets for this former knitwear model (he did, reluctantly, lose a few pounds and cut his hair when he was first cast as 007). A Roger Moore character doesn’t exude physical menace at his enemies so much as witheringly reduce them into puddles of regret with his disdain and his impeccable tailoring. Daniel Craig, Moore once told an interviewer, “looks like a killer. Whereas I look like a decrepit lover.” It was this wry gift for self-deprecation, and a refusal to take himself too seriously, that made Moore one of the most enduring, endearing actors of the 20th century. In fully owning his limitations, he only made his uniquely debonair charm more indelible.

Consider his Bond. If Sean Connery’s 007 was a louche and sexually predatory brawler (who, it should be noted, pioneered the RompHim), and Timothy Dalton was a monotonal frown in a tuxedo, Moore’s was a tall, graceful, distinctly sommelier-like Bond whose primary skills were unflappability and skiing. He wasn’t entirely convincing as a seducer (Chris Klimek has neatly summed up “his terrifying, accordion-lipped kissing method”), but only because, like a male lion, he often seemed too lazy to aggressively pursue women. Watch this supercut of Moore’s seven appearances as Bond and you’ll observe how minutely his expression shifts from blank detachment to blank concern to blank amusement. As Moore himself told Maureen Dowd, he mostly saw acting as being prepared to “get up early, say your lines, and not trip over the furniture.”

And yet, for several generations of Bond fans, Moore’s 007 was the one to beat. He had the gadgets. He had the most manifestly bizarre locations (everywhere from an underwater superlair to outer space). He had the totally incomprehensible scenes with Margaret Thatcher impersonators. And, crucially, he had the ability to pull off all the above without diminishing his dignity. One of his favorite quips was that he was too cheap as an actor to be replaced, but that undersells how adept he was at making even the most flagrantly ludicrous plots (megalomaniacs gassing the whole planet, megalomaniacs drowning all of Silicon Valley, megalomaniacs sparking nuclear war and establishing new colonies underwater) engaging. As A. O. Scott wrote Tuesday, “He knew exactly how silly these endeavors were, but he was committed to them all the same.”

Moore brought this same fusion of winking irony and self-satire to his post-Bond roles, after a five-year break from acting following A View to a Kill. In 1997’s Spice World, a shouting, cotton-candy glitterbomb of a movie, Moore mocked his Bond days by playing the enigmatic, smoking jacket-sporting, martini-shaking, cat-stroking head of a record label. In 2002’s Boat Trip, Moore’s rapaciously sexual character, Lloyd Faversham, attempted to seduce Horatio Sanz’s character at the breakfast table by offering him a bite of “my sausage.” The movie was universally panned and lambasted for homophobia, but Roger Ebert praised Moore as “the one ray of wit in the entire film ... a homosexual man who calmly wanders through the plot dispensing sanity, as when, at the bar, he listens to the music and sighs, ‘Why do they always play Liza?’”

It’s this dogged consistency, rather than a particular talent for transformation, that defined Moore’s career as an actor, as well as a genial attitude toward reporters and fans. Regardless of the vehicle, or whether his co-star was David Niven or Melanie C, Moore’s output and temperament remained reliable. Since the news of his death broke, many have remembered him as a kind and witty person, a UNICEF ambassador, a generous friend, and a consummate gentleman. Reviewing his autobiography in 2008, the Telegraph noted that “little, if any, dirt is dished; Moore is clearly a very nice man who prefers to nurse resentments privately. And if, on the one hand this makes his memoirs a little short of drama … what we do get is the amused voice of an endlessly cheering actor who was always very much better than either he or his critics ever thought.”

The Ideology of an Ariana Grande Concert
May 23rd, 2017, 06:37 PM

Among the many sickening aspects of the bombing that killed 22 people at an Ariana Grande concert in Manchester, England, Monday night is the sense of a pattern. Ever since the November 2015 Paris attacks that claimed lives at a rock concert and soccer match, violent Islamic extremists have continued making mass entertainment events one of their primary targets. There was the Pulse massacre in Orlando and the street-festival truck attack in Nice, but also killings at nightclubs in Istanbul, Kuala Lumpur, and Tel Aviv.

There’s no doubt a logistical rationale to assaulting these “soft targets”—they may be vulnerable, and bloodshed at them can inspire a particular kind of fear among civilians. But it stands to reason there’s an ideological motive too: A culture is embodied in its gatherings and in its entertainments. The particular implications of targeting musical events, which are almost inevitably bound up with art’s larger humanitarian project, have been widely noted.

Attacking Grande’s concert has a few other implications, regardless of the extent to which those implications were clear to the attacker, about whom little is yet known other than that ISIS has claimed responsibility. There’s really no exaggeration in saying Grande stands for freedom—female freedom, and also the general freedoms of expression the liberal West aspires to embody.

Grande’s fan base skews female and young, and my colleague Sophie Gilbert writes that the bombing “reminds girls and young women that there will always be people who hate them simply because they were born female.” Compounding that is how the concert itself celebrated female liberation. Grande sings frankly about enjoying independence and sex, and has a reputation for tussling with commenters who call her “whore” or define her by her relationships with famous men. Her most recent album is titled Dangerous Woman. A tweet from December 2016: “expressing sexuality in art is not an invitation for disrespect !!! just like wearing a short skirt is not asking for assault.”

More broadly, Grande’s narrative is one of self-determination and savvy capitalist striving. A former child actor, she grew up in the public eye while showing a remarkable amount of poise, using her vocal talents and breezy charm to maintain a unique persona while also ladling in ever-greater expressions of maturity. The pop music she makes is, for people who love pop, some of the best of recent years because of the way that it transcends anonymity on its way to fun. Religion-wise, she’s chosen her own path: Grande was raised Catholic but says she left the church after realizing it would not accept her brother Frankie, who is gay. In 2014, she began practicing Kabbalah, a mystical Jewish tradition. Politically, she has been outspoken as well, supporting Hillary Clinton for president and attending the Women’s March.

Such a career, especially for a woman, is obviously predicated on the values and openness that ISIS opposes. Yet following this attack, some American voices have made an issue of Grande’s persona as well. The talk-radio host and 2008 Libertarian vice presidential nominee Wayne Allyn Root sent out his condolences with the addition “BUT she is typical Hollywood lib. Still hate America?” Mike Cernovich, the prominent alt-right pundit, tweeted and then deleted an image quoting Grande saying “I hate America, I hate Americans.”

These are references to the one moment in Grande’s rise that approached the level of political scandal: When she was caught on camera in a donut shop licking one of the treats and then jokingly saying “I hate America, I hate Americans” to a friend. Her public apology later insisted that she actually loved the U.S., but didn’t love its childhood-obesity crisis or how “we as Americans eat and consume things without giving any thought to the consequences.”

It was a silly moment—one that showed not actual hatred of America but rather a young woman’s comfort within the country as she privately took advantage of freedom of speech. The grim irony is that we’re now reminded that people who actually do hate America and the West see the likes of Grande, and those who look up to her, as exactly their enemy.

The Bachelorette: Memes for the Right Reasons
May 23rd, 2017, 06:37 PM

They filed in, one by one: Men who warmly hugged Rachel Lindsay, The Bachelorette’s latest star; men who politely kissed her; men who were awkward; men who were charming; men who were in possession of jaws that were shockingly square. One of them, a singer-songwriter, strode across the perma-wet driveway of the Bachelor mansion playing a guitar and balladeering. Another came armed with a megaphone and a catchphrase (“Whaboom!”). Another came playing drums, accompanied by a full marching band. Another came dressed as a penguin (since, he explained, they “mate for life”). And one of the men, Adam, came with a dummy—a literal one, to be clear: a stuffed doll, about three feet tall, his facial features drawn on, his wig brown, his suit blue. His name? Adam Junior.

“AJ,” as the doll would soon be nicknamed by the show, its contestants, and its viewers, was meant to resemble Adam, a 27-year-old real-estate agent from Dallas and a guy who, in his pre-show questionnaire, described his favorite actor as Jennifer Lawrence (“because she is every girl’s goal”) and the most romantic gift he’d ever received as a threesome (“it was my birthday”).

While Adam, for the most part, blended in with the other square-jaws vying for Rachel’s heart, Adam Junior quickly became the Bachelorette premiere’s guest star—and, in that, he made a show that is known largely for its tongue-in-cheek take on “reality” more directly surreal than it has ever been before. The frozen-faced dummy got more air time than Adam, and indeed more than any of the human guys Rachel met. He got his own theme music (Gothic organs mixed with bomchikawahwah synths). And he got his own plot line, too: AJ, silently pining for Rachel. AJ, whose love is deep and pure and impossible. AJ, the Montague to Rachel’s Capulet, his feelings destined to remain unrequited on account of him being made of upholstery.

And: AJ, the doll destined to become a meme.

On Monday, the show’s roving camera panned across the Bachelor patio as AJ, silent, watched Rachel flirt with the guy in the penguin suit. The camera cut back to AJ, in a “talking” head interview, his smile ambiguously Mona Lisan, his heart unambiguously full. “Je n’ai jamais vu une telle beauté,” the doll murmured, via voiceover, as “I have never seen such beauty” flashed in subtitles on the screen below him. And then: “She ignites a fire in my soul.” And then: “I am awakened.” And then: “I pray she feels the same.”

Later, the show’s cameras would catch AJ third-wheeling a conversation between Rachel and a suitor, the humans sitting on a couch, the doll stretched languorously before a roaring fire. They’d catch him sitting on the couch with his “fellow” contestants. They’d catch him staring blankly into the middle distance, lovelorn and sad.

It was all supremely bizarre—even by the standards of The Bachelorette, a show that belongs to a franchise whose editors once superimposed the glowing face of a contestant’s dead dog atop an image of a Mexican beach.

But AJ’s party-crashing also, of course, made so much sense. AJ, you will be unsurprised to learn, was a hit online, in the backchannel discussion that is so much a part of the Bachelor franchise’s appeal.

The Bachelorette’s producers provided ample opportunity for that kind of conversation: AJ appeared again and again during the show’s premiere, the subject of sympathy, conversation, and decided confusion among the show’s contestants—some of whom were in on the joke, others of whom seemed as surprised as viewers to see San Fernando temporarily transformed into the Valley of the Doll.  

“That’s like, low-key creepy,” one of the men said, marveling at the dummy who slumped on a couch in the Bachelor mansion, as AJ’s signature, organ-laced music played. Someone had put a glass of champagne in AJ’s limp hand, where the glass of liquid remained, unmoved, even its bubbles still.

“Oh, it’s beyond low-key,” another replied.  

Josiah, an early front-runner, was more magnanimous. “I really think that everybody’s here for love,” he said in a talking-head interview, “including AJ. He doesn’t have to say much for you to know what he’s thinking. And I can tell he’s getting a little jealous. He wants to get some time with Rachel.”

Kenny, a professional wrestler from Las Vegas, was more ambivalent: “AJ’s dressed fresh,” he admitted. “He’s got actually a pretty dope fade ... but I would say if he turns into Annabelle and, like, moves into different rooms, I’m gonna burn it myself.”

Most striking, though, was Rachel’s reaction to the floppy prop who took over her season premiere. “Can we talk about Adam Junior for two seconds?” Rachel asked Adam, his owner and her suitor. “He scares me.”

“No, he’s okay,” Adam replied.

“He makes me nervous,” Rachel insisted. “I have a thing with dolls.”

“No, don’t be nervous,” came Adam’s answer.  

With that, Rachel, in the clearest terms possible, told Adam that his schtick scared her, and made her nervous. And with that, too, in the clearest terms possible, Rachel was ignored—not just by Adam, but by the show that is supposed to be hers.

It was not a good way to begin, especially given Rachel’s status as the first woman of color to be the show’s star. But it was also an unsurprising way to begin. The Bachelor franchise has long been concerned with extending its reach beyond the televised show itself—via ABC.com-sponsored fantasy leagues, via contestants’ presence on social media, and most readily via episodes that lend themselves to emoji-laced conversations. AJ, the doll who dares you not to meme him, is simply one more extension of that impulse.  

Here’s one more thing you’ll be unsurprised to learn: AJ, the doll, has a Twitter account. Its handle is @adamjrthedoll. It was live-tweeting throughout the show. One of its messages serves as a caption of the picture of AJ stretched before a fire. The tweet reads, “I’m just a doll looking for love, here for the right reasons, and excited about my journey.”

The Horror of an Attack Targeting Young Women
May 23rd, 2017, 06:37 PM

Every terrorist attack is an atrocity. But there’s something uniquely cowardly and especially cruel in targeting a venue filled with girls and young women. On Monday night, a reported suicide bomber detonated a device outside Manchester Arena, killing 22 people, many of whom were children. The victims had gathered at the 21,000-seat venue to see the pop musician Ariana Grande, a former Nickelodeon TV star whose fan base predominantly includes preteen and teenage girls. The goal of the attack, therefore, was to kill and maim as many of these women and children as possible.

How can you respond to such an event? Like the shooting at Sandy Hook Elementary School in 2012, it’s something so horrific in intent and execution that it boggles the mind. And like the 2015 attack claimed by ISIS at the Bataclan theater in Paris and the shooting in Orlando last year, the Manchester bombing was targeting people who were celebrating life itself—the joy of music and the ritual of experiencing it as a community. For a number of children at the Grande concert, it would have been their first live musical event. Images and video of the aftermath of the bombing, depicting teenagers fleeing from the event, reveal some still clutching the pink balloons that Grande’s team had released during the show. The youngest confirmed victim of the attack, Saffie Rose Roussos, was 8 years old.

After attacks like this, there are always calls for the public not to be afraid. This is right, and logical—the purpose of acts of terrorism is to spread fear, so it’s correct that officials entreat citizens to resist that fear and keep going about their daily lives. But the Manchester bombing delivers another message, too. It reminds girls and young women that there will always be people who hate them simply because they were born female. As a performer, Grande is a totem of irrepressible force and burgeoning sexuality. As my colleague Spencer Kornhaber wrote last year, her 2016 album Dangerous Woman injects “darker flavors into bubblegum,” with Grande exploring themes of discovering her own power, and her own vulnerability. To target her show is to target thousands of her fans who are considering their own emerging sexuality through the prism of her music.

As the NPR critic Ann Powers wrote on Facebook, attending a concert as a teenage girl is a heady and potent experience. “The best night of your life, girl version:” she wrote, “a ticket in an envelope you’ve marked with glitter glue, putting on too much of the eyeshadow you bought at the drugstore that day, wearing a skirt that’s shorter than your school uniform, telling your mom it’s okay and you’ll meet her right after the show … dancing experimentally, looking at the woman onstage and thinking maybe one day you’ll be sexy and confident like her, realizing that right this moment you are sexy and confident like her.” This is the essence of what so many girls in Manchester Arena were feeling, before some of them were murdered.

Many will point out that this was a terrorist attack, and terrorist attacks target everyone, regardless of gender, or age, or iTunes playlists. There were plenty of men at the concert, too, including a 64-year-old grandfather who was struck in the face by broken glass as he waited to collect his granddaughter. And until more information is released about the attacker, we can only speculate about his motives, although ISIS has been swift to claim responsibility. But the venue he chose to target speaks volumes. The impulse to hate and fear women who are celebrating their freedom—their freedom to love, their freedom to show off their bodies, their freedom to feel joy, together—is older than ISIS, older than pop concerts, older than music itself.

What Saturday Night Live's Departures Mean for Its Future
May 23rd, 2017, 06:37 PM

In the past, departing Saturday Night Live cast members have gotten whole sketches devoted to sending them off. Kristen Wiig was serenaded with song and dance from Mick Jagger and the rest of the crew; Bill Hader’s Stefon finally married Seth Meyers; Will Ferrell got a series of testimonials. On last weekend’s 42nd season finale, the show said goodbye to three cast members with varying tenures and legacies: Bobby Moynihan, Vanessa Bayer, and Sasheer Zamata. The first got a goodbye sketch of sorts, the second a couple of featured roles on her last night, and the third no acknowledgement at all. It was a slightly muddled end to what feels like one of SNL’s weaker eras—even as the show breaks ratings records in the age of Donald Trump.

The sendoffs of SNL legends like Wiig, Hader, Fred Armisen (he got a goodbye punk show), and Andy Samberg (a special 100th Digital Short) had a sort of choreography to them—Wiig and Samberg left in 2012, Hader and Armisen in 2013, and Meyers in 2014, slowly closing the book on a remarkable run for the show. The departure of this era’s bedrock actors has been messier—Taran Killam and Jay Pharoah were surprisingly axed last year, Bayer reportedly decided not to pick up her contract after seven years, and Zamata, it seems, left more unceremoniously (her departure was only announced after the last episode, rather than before it).

Moynihan is the only one whose time definitively seemed to be up. Though he was never one of the leading stars of the show, he’s unquestionably a first-ballot Saturday Night Live hall of famer. His nine years as a cast member is topped by only a handful of others (Meyers, Armisen, Darrell Hammond, Tim Meadows, and the eternal Kenan Thompson), and he originated a pantheon-level character along the way—Drunk Uncle, a Weekend Update mainstay who was fond of uttering vaguely offensive malapropisms as he burped on his sweater and offered his take on the latest news.

But Moynihan was much more than Drunk Uncle (who made a final appearance last weekend but had been semi-retired for a long time). In his early years on the show, he was an exciting, strange addition to the cast who delighted in playing boisterous and outlandish sketch characters. I was a huge fan of his obnoxious Uno Pizzeria waiter Mark Payne (“smells like pepper!”), the sobbing son of Hader’s Italian TV personality Vinny Vedecci, and the overzealous flirter Janet Peckinpaugh (who described herself as “a flesh cube”).

Later, Moynihan evolved into the show’s most reliable hand, a veteran who could effectively improvise if a sketch started out on the wrong foot (witness his ad-libbing as a cast member arrives late to “Space Pants”), and who discovered an adeptness for playing vacuous TV personalities (I think “Mornin’ Miami” is one of the best written, best performed sketches in SNL’s last decade). Beyond that, he was an obvious and devoted fan of the show who’s certain to return as a host someday soon.

But Moynihan, thanks to his long tenure, had feet in both the Wiig/Hader/Samberg era and the newer one defined by Kate McKinnon, Beck Bennett, and Leslie Jones. Bayer was probably one of the latter group’s most familiar faces. She was a quick hit in her first season with her Miley Cyrus impression (one that now seems hilariously outdated) and was always called on by SNL to play the uncool: the stuffy mom, the confused straight man, Jacob the Bar Mitzvah Boy. Bayer might have gotten a little boxed in by the end, but she was another reliable part of the firmament. Though she doesn’t have a sitcom job lined up as Moynihan does, she undoubtedly has a fruitful career ahead of her.

Zamata’s hiring came in the middle of the 2013-2014 season (SNL’s 39th). It was a rare panic hire for the show’s cool-headed producer Lorne Michaels, in response to frequent and growing criticism about the show’s lack of non-white actors, particularly black actresses (there was no one on staff to do a Michelle Obama impression, among many others). Zamata is primarily a stand-up comic (her new special, Pizza Mind on Seeso, is very funny), and stand-ups often struggle to meld with SNL’s long-refined approach to making sketch comedy.

Though Zamata did a lot of impressions and was regularly featured on the show, it was almost always as a background player. She rarely appeared as the star of sketches, and never found the kind of recurring characters that define a cast member’s time on SNL. A lot of stand-up cast members get chances to present bits of their act as “desk pieces” on Weekend Update (Pete Davidson and Jones do so regularly), but Zamata never did. She may, at some point, offer her take on what it was like to work on the show and why she struggled to break out, but as it is, her four-year tenure remains somewhat inscrutable (it was probably her decision not to announce her departure earlier).

All of these changes leave SNL with some major casting decisions ahead of it. On the one hand, the show’s ratings are better than ever, buoyed by the presidential election and many a viral political moment. On the other hand, so much of this season was defined by SNL’s guest stars, from Alec Baldwin as Trump to Melissa McCarthy as Sean Spicer. The season finale’s guest host, Dwayne Johnson, was accompanied by Baldwin and Tom Hanks for his opening monologue; the episode’s most remarked-on moment was probably Hanks reprising his beloved new character “David S. Pumpkins” for a hot second.

Bayer and Moynihan would be crucial parts of several sketches a night; Zamata, though less heralded, was herself a valuable and seasoned performer, if rarely a star. The show seems desperately in need of new blood, even as McKinnon’s star continues to rise: Of the ensemble around her, only Jones and the ever-reliable Thompson feel like true standouts. In a way, SNL’s latest era seems to be over before it got a chance to really begin. But with ratings as high as they’ve ever been, it’s unclear whether Michaels or NBC will feel much pressure to really shake things up.

Roger Moore, Star of James Bond Films, Dies at 89
May 23rd, 2017, 06:37 PM

Roger Moore, who brought out James Bond’s wry side in seven films featuring 007, and before that was known for his portrayal of Simon Templar in The Saint, has died. He was 89.

Moore’s death after a brief battle with cancer was confirmed by his children in a statement.

Moore’s last appearance in a Bond film was in 1985’s A View to a Kill, and in the years since then he appeared in a smattering of movies, and spent much of his time doing humanitarian work, for which he was named a UNICEF goodwill ambassador in 1991. His approach to work was summed up in this quote to The New York Times in 1970, well before he achieved global stardom for the Bond movies. “Noel Coward once gave me a memorable bit of advice—‘Accept every thing,’” Moore told the newspaper. “If you're an actor, keep working. That's what Coward said. Glorious advice.”

Moore had taken that advice to heart. Following a short stint as a model in the 1950s, Moore appeared in small roles on television and in films with little success—commercial or critical. His acting breakthrough came in 1958 with Ivanhoe, a British TV show based on the novel by Sir Walter Scott. Other roles followed, but the show that made him a household name came in 1962 with The Saint. He played Simon Templar, a sophisticated thief who fights villains with his wits and bon mots, for seven years in the series based on the Leslie Charteris novels.

It was a role that came in handy when the producers of the Bond movies needed a new star. Sean Connery, who was Moore’s acting rival for years, had made the Bond franchise a tremendous international success, but he’d left the series after six official films. George Lazenby, the Australian who portrayed 007 in the much-maligned (unfairly) On Her Majesty’s Secret Service, quit the role—or was fired depending on the version—after one movie. Bond’s producers turned to Moore.

Moore first played Bond in 1973’s Live and Let Die and continued playing him until A View to a Kill in 1985, when he was 58. Along the way, he transformed the role that Connery had portrayed with subtle menace and Lazenby with emotion, into one that possessed wit—and reflected the taste of the era: He wore digital watches in several outings and drove a Lotus Esprit S1.

His acting was often criticized, and he publicly, at least, didn't seem to take it to heart.

“My acting range has always been something between the two extremes of ‘raises left eyebrow’ and ‘raises right eyebrow,’” he is once reported to have said.

He didn’t act for the five years after his retirement as Bond, but returned to movies, often appearing as an English aristocrat. He was better known for his humanitarian work in his later years, and was honored with awards from several governments, including a knighthood in 2003.

Moore, who had lived in European tax havens since the 1970s, was married four times. His marriages to Doorn Van Steyn (1946-53), Dorothy Squires (1953-68), and Luisa Mattioli (1969-96) ended in divorce. He married his companion Kristina Tholstrup in 2002. Tholstrup survives him, as do three children from his relationship with Mattioli.

Why Remix The Birth of a Nation?
May 23rd, 2017, 06:37 PM

In some dark corners, The Birth of a Nation might be received as enthusiastically today as it was when it debuted in 1915. The silent dramatization of the assassination of President Abraham Lincoln and the rise of the Ku Klux Klan during Reconstruction was the first American motion picture to be screened at the White House, with President Woodrow Wilson and his cabinet in attendance. While violent racism is not tolerated as openly as it was in Wilson’s day, vintage white nationalism is making a comeback in the Trump era.

Richard Spencer, the most prominent white supremacist in America, led a group of torch-bearing demonstrators last week to protest the removal of a Confederate monument in Charlottesville, Virginia. During the last month, neo-Confederate alt-right rallies have popped up in Lexington, New Orleans, and other cities, like the opening scenes of a dark reboot of D.W. Griffith’s pioneering piece of propaganda. The Birth of a Nation is as relevant now as it has been at any point over the last century.

That’s why, on Tuesday, the artist and musician DJ Spooky is performing his own version of The Birth of a Nation at the John F. Kennedy Center for Performing Arts. Rebirth of a Nation, his multimedia reimagining of the silent film, includes an ambitious soundtrack performed live, much as the original 1915 screenings sometimes did. It’s a piece that he’s staged on occasion since 2004. Now, with the renewed prominence of virulent white supremacy, the themes resonate more strongly than they did just a summer ago, when he staged the piece at Chicago’s Millennium Park.

“These things are all heartbreakingly, eerily, part of the contemporary landscape,” says DJ Spooky, also known as That Subliminal Kid, or by his given name, Paul D. Miller. “It’s not so far in the rearview mirror.”

For his performance at the Kennedy Center, the Washington, D.C., native will appear on stage with three screens. He’ll be remixing the visuals, manipulating the original film, and adding snippets of touched-up or contemporary video. Miller composed an original score for Rebirth of a Nation, performed live by two violins, viola, and cello; he samples and loops its motifs with beats to build a layered soundscape. (Kronos Quartet frequently performs Rebirth of a Nation with Miller, but for the Kennedy Center performance he will be joined by a D.C. ensemble called Sound Impact.)

Miller tells me that he looked to Joseph Carl Breil for inspiration for the score. Breil, the son of a Prussian immigrant and one of the first composers to make music specifically for films, composed a three-hour soundtrack for the original Birth of a Nation. Miller describes it as an early, pivotal accomplishment in remix culture. Breil borrowed from both Dixieland tunes and traditional composers such as Richard Wagner for his score, combining vernacular heartland music with classical continental melodies. In fact, Miller attributes Hollywood’s embrace of Wagner at least in part to Breil’s popular adaptation of his themes.

“I wanted as much as possible to think about the trajectory of The Birth of a Nation through the mass-media landscape,” he says. “Francis Ford Coppola uses ‘Ride of the Valkyries’ in Apocalypse Now. He says that he was inspired to do that by watching Birth of a Nation. You have Star Wars. George Lucas said he studied Birth of a Nation’s battle scenes for inspiration for Star Wars. The Imperial March, dum, dum, dum, dut-dut-a-dum—that’s an appropriation of Wagner as well.”

Miller shares an academic sense of admiration for the technical artistry of the work of Griffith (and Wagner, and Breil). Rebirth of a Nation is Miller’s own Gesamtkunstwerk, the Wagnerian term for “total art”—at least, in scope, it is his most ambitious multimedia project to date. He frames his performance as a protest or a piece of “counter-propaganda,” but also as a project that struggles seriously with its source material. “By using their tools against themselves, you get some intriguing effects,” he says.

Miller’s work to adapt The Birth of a Nation led to an even broader historical project. He is the executive producer for Pioneers of African-American Cinema, a five-volume collection of digitally restored cinematic works by early black filmmakers from the 1920s through the 1940s. Released last summer and now streaming on Netflix, the collection draws on film archives from the Library of Congress, the Museum of Modern Art, the University of California Los Angeles Film & Television Archive, and other libraries. Pioneers compiles almost 20 hours of so-called “race films.” Miller and other musicians, including the composer Makia Matsumura and the late drummer Max Roach, contributed new and original scores for the silent works.

The Birth of a Nation was another “race film,” one that was received simply as a film in its day. Miller’s Rebirth of a Nation is in the simplest sense an effort to highlight how its skewed imagery still persists a century later. More critically, it’s a look back at the dawn of alternative facts in a moving-pictures format. The groundbreaking film was a blockbuster hit with popular audiences, even though it was reviled by critics and led to protests by the newly formed NAACP. Its white supporters answered black protesters not with counterarguments but with violence, riots, and even murder.

“The whole idea here is that cinema deeply conditions our response to the everyday world. Everyone is watching different kinds of news. It’s like multiple parallel universes, where you have no authentic engagement of facts or reality,” Miller says. “Birth of a Nation was considered to be a true story.”

How One Hundred Years of Solitude Became a Classic
May 22nd, 2017, 06:37 PM

In 1967, Sudamericana Press published One Hundred Years of Solitude (Cien años de soledad), a novel written by a little-known Colombian author named Gabriel García Márquez. Neither the writer nor the publisher expected much of the book. They knew, as the publishing giant Alfred A. Knopf once put it, that “many a novel is dead the day it is published.” Unexpectedly, One Hundred Years of Solitude went on to sell over 45 million copies, solidified its stature as a literary classic, and garnered García Márquez fame and acclaim as one of the greatest Spanish-language writers in history.

Fifty years after the book’s publication, it may be tempting to believe its success was as inevitable as the fate of the Buendía family at the story’s center. Over the course of a century, their town of Macondo was the scene of natural catastrophes, civil wars, and magical events; it was ultimately destroyed after the last Buendía was born with a pig’s tail, as prophesied by a manuscript that generations of Buendías tried to decipher. But in the 1960s, One Hundred Years of Solitude was not immediately recognized as the Bible of the style now known as magical realism, which presents fantastic events as mundane situations. Nor did critics agree that the story was really groundbreaking. To fully appreciate the novel’s longevity, artistry, and global resonance, it is essential to examine the unlikely confluence of factors that helped it overcome a difficult publishing climate and the author’s relative anonymity at the time.

* * *

In 1965, the Argentine Sudamericana Press was a leading publisher of contemporary Latin American literature. Its acquisitions editor, in search of new talent, cold-called García Márquez to publish some of his work. The writer replied with enthusiasm that he was working on One Hundred Years of Solitude, “a very long and very complex novel in which I have placed my best illusions.” Two and a half months before the novel’s release in 1967, García Márquez’s enthusiasm turned into fear. After mistaking an episode of nervous arrhythmia for a heart attack, he confessed in a letter to a friend, “I am very scared.” What troubled him was the fate of his novel; he knew it could die upon its release. His fear was based on a harsh reality of the publishing industry for rising authors: poor sales. García Márquez’s previous four books had sold fewer than 2,500 copies in total.

The best that could happen to One Hundred Years of Solitude was to follow a path similar to the books released in the 1960s as part of the literary movement known as la nueva novela latinoamericana. Success as a new Latin American novel would mean selling its modest first edition of 8,000 copies in a region with 250 million people. Good regional sales would attract a mainstream publisher in Spain that would then import and publish the novel. International recognition would follow with translations into English, French, German, and Italian. To hit the jackpot in 1967 was to also receive one of the coveted literary awards of the Spanish language: the Biblioteca Breve, Rómulo Gallegos, Casa de las Américas, and Formentor.

This was the path taken by new Latin American novels of the 1960s such as Explosion in a Cathedral by Alejo Carpentier, The Time of the Hero by Mario Vargas Llosa, Hopscotch by Julio Cortázar, and The Death of Artemio Cruz by Carlos Fuentes. One Hundred Years of Solitude, of course, eclipsed these works on multiple fronts. Published in 44 languages, it remains the most translated literary work in Spanish after Don Quixote, and a survey among international writers ranks it as the novel that has most shaped world literature over the past three decades.

And yet it would be wrong to credit One Hundred Years of Solitude with starting a literary revolution in Latin America and beyond. Sudamericana published it when the new Latin American novel, by then popularly called the boom latinoamericano, had reached its peak in worldwide sales and influence. From 1961 onward, like a revived Homer, the almost blind Argentine writer Jorge Luis Borges toured the planet as a literary celebrity. Following in his footsteps were rising stars like José Donoso, Cortázar, Vargas Llosa, and Fuentes. The international triumph of the Latin American Boom came when the Nobel Prize in Literature was awarded to Miguel Ángel Asturias in 1967. One Hundred Years of Solitude could not have been published in a better year for the new Latin American novel. Until then, García Márquez and his work were practically invisible.

* * *

In the decades before it reached its zenith, the new Latin American novel vied for attention alongside other literary trends in the region, Spain, and internationally. Its primary competition in Latin America was indigenismo, which sought to give voice to indigenous peoples and was supported by many writers from the 1920s onward, including a young Asturias and José María Arguedas, who wrote in Spanish and Quechua, a native language of the Andes.

In Spain during the 1950s and 1960s, writers embraced social realism, a style characterized by terse stories of tragic characters at the mercy of dire social conditions. Camilo José Cela and Miguel Delibes were among its key proponents. Latin Americans wanting a literary career in Spain had to comply with this style, one example being a young Vargas Llosa living in Madrid, where he first wrote social-realist short stories.

Internationally, Latin American writers saw themselves competing with the French nouveau roman, or “new novel.” Supporters, including Jean-Paul Sartre, praised it as the “anti-novel.” For them, the goal of literature was not narrative storytelling, but to serve as a laboratory for stylistic experiments. The most astonishing of such experiments was Georges Perec’s 1969 novel A Void, written without ever using the letter “e,” the most common letter in the French language.

In 1967, the book market was finally ready, it seemed, for One Hundred Years of Solitude. By then, mainstream Latin American writers had grown tired of indigenismo, a style used by “provincials of folk obedience,” as Cortázar scoffed. A young generation of authors in Spain belittled the stories in social-realist novels as predictable and technically unoriginal. And in France, emerging writers (such as Michel Tournier in his 1967 novel Vendredi) called for a return to narrative storytelling as the appeal of the nouveau roman waned.

Between 1967 and 1969, reviewers argued that One Hundred Years of Solitude overcame the limitations of these styles. Contrary to the localism of indigenismo, reviewers saw One Hundred Years of Solitude as a cosmopolitan story, one that “could correct the path of the modern novel,” according to the Latin American literary critic Ángel Rama. Unlike the succinct language of social realism, the prose of García Márquez was an “atmospheric purifier,” full of poetic and flamboyant language, as the Spanish writer Luis Izquierdo argued. And contrary to the formal experiments of the nouveau roman, his novel returned to “the narrative of imagination,” as the Catalan poet Pere Gimferrer explained. Upon the book’s translation into major languages, international reviewers acknowledged this, too. The Italian writer Natalia Ginzburg forcefully called One Hundred Years of Solitude “an alive novel,” assuaging contemporary fears that the form was in crisis.

And yet these and other reviewers also remarked that One Hundred Years of Solitude was not a revolutionary work, but an anachronistic and traditionalist one, whose opening sentence resembled the “Once upon a time” formula of folk tales. And rather than a serious novel, it was a “comic masterpiece,” as an anonymous Times Literary Supplement reviewer wrote in 1967. Early views on this novel were indeed different from the ones that followed. In 1989, Yale literary scholar Harold Bloom solemnly called it “the new Don Quixote” and the writer Francine Prose confessed in 2013 that “One Hundred Years of Solitude convinced me to drop out of Harvard graduate school.”

Nowadays scholars, critics, and general readers mainly praise the novel as “the best expression of magical realism.” By 1995, magical realism was seen as making its way into the works of major English-language authors such as John Updike and Salman Rushdie, and was moreover presented as “an inextricable, ineluctable element of human existence,” according to the New York Times literary critic Michiko Kakutani. But in 1967, the term magical realism was uncommon, even in scholarly circles. During One Hundred Years of Solitude’s first decade or so, to make sense of this “unclassifiable work,” as a reviewer put it, readers opted for labeling it a mixture of “fantasy and reality,” “a realist novel full of imagination,” “a curious case of mythical realism,” “suprarrealism,” or, as a critic for Le Monde called it, “the marvelous symbolic.”

Now seen as a story that speaks to readers around the world, One Hundred Years of Solitude was originally received as a story about Latin America. The Harvard scholar Robert Kiely called it “a South American Genesis” in his review for the New York Times. Over the years, the novel grew to have “a texture of its own,” to use Updike’s words, and it became less a story about Latin America and more about mankind at large. William Kennedy wrote for the National Observer that it is “the first piece of literature since the Book of Genesis that should be required reading for the entire human race.” (Kennedy also interviewed García Márquez for a feature story, “The Yellow Trolley Car in Barcelona, and Other Visions,” published in The Atlantic in 1973.)

Perhaps even more surprisingly, respected writers and publishers were among the novel’s many and powerful detractors. Asturias declared that the text of One Hundred Years of Solitude plagiarized Balzac’s 1834 novel The Quest of the Absolute. The Mexican poet and Nobel laureate Octavio Paz called it “watery poetry.” The English writer Anthony Burgess claimed it could not be “compared with the genuinely literary explorations of Borges and [Vladimir] Nabokov.” Spain’s most influential literary publisher in the 1960s, Carlos Barral, not only refused to import the novel for publication but also later wrote that “it was not the best novel of its time.” Paradoxically, such entrenched criticism helps to make a literary work like One Hundred Years of Solitude more visible to new generations of readers and eventually contributes to its consecration.

With the help of its detractors, too, 50 years later the novel has fully entered popular culture. It continues to be read around the world, by celebrities such as Oprah Winfrey and Shakira, and by politicians such as Presidents Bill Clinton and Barack Obama, who called the book “one of my favorites from the time I was young.”

More recently, with the aid of ecologically minded readers and scholars, One Hundred Years of Solitude has unexpectedly gained renewed significance as awareness of climate change increases. After the explosion of the BP drilling rig Deepwater Horizon in 2010 in the Gulf of Mexico (one of the worst accidental environmental catastrophes in history), an environmental-policy advocate referred to the blowout as “tragic realism” and a U.S. journalist called it the “pig’s tail of the Petro-World.” What was the connection with One Hundred Years of Solitude? The explosion occurred at an oil and gas prospect that a group of BP engineers had named Macondo two years earlier, so when Deepwater Horizon blew up, reality caught up with fiction. Some readers and scholars started to claim the spill revealed a prophecy similar to the one hidden in the Buendías’ manuscript: a warning about the dangers of humans’ destruction of nature.

García Márquez lived to see the name Macondo become a significant, if horrifying, part of the earth’s geological history, but not to celebrate the 50th anniversary of his masterpiece: He passed away in 2014. But the anniversary of his best-known novel will be celebrated globally. As part of the commemoration, the Harry Ransom Center in Austin, Texas, where García Márquez’s archives have been kept since 2015, has opened an online exhibit of unique materials. Among its contents is the original typescript of the “very long and very complex novel” that did not die but attained immortality the day it was published.

Twin Peaks Returns to Terrify, Delight, and Confound
May 22nd, 2017, 06:37 PM

“I am dead, and yet I live,” Laura Palmer (Sheryl Lee) intones in her melancholic backward-speak, proving she’s still the soul of her show after all these years. Who could have put it better? Sunday’s return of Twin Peaks was everything fans might have expected: as confounding, horrifying, and furtive as its co-creator David Lynch’s (relatively) recent work, but not entirely lacking in the homespun charm of the original network-TV series. This new Twin Peaks might have a vicious box monster, an eyeless corpse, and a much nastier Dale Cooper (Kyle MacLachlan) on board now. But Lynch and his co-creator Mark Frost haven’t let go of the show’s sense of humor, its soapy grandiosity, and their strange affection for the tormented souls that make up its ensemble.

When the duo brought the original Twin Peaks to ABC in 1990, they seemed largely uninterested in distinguishing between episodes, unfolding their serialized horror-soap in increasingly inscrutable fashion until the ratings plummeted. Though original programming on premium-TV networks barely existed then, those networks are now the default home for the kind of weird, auteur-driven television Twin Peaks helped pioneer. And today, Lynch and Frost seem even less concerned about keeping audiences on board with a propulsive or linear narrative.

As such, it’s a challenge to summarize the actual plot of “Parts 1 & 2,” which aired Sunday night as the first two installments of an 18-hour series. These episodes scattered a lot of fascinating imagery, disconnected story ideas, and inter-dimensional nightmare antics in front of their audience; it’s up to viewers to try to put the pieces together, or (my preferred method) simply soak in every bizarre tableau with glee. It’s been more than 10 years since Lynch released a major work (2006’s Inland Empire), and judging from what has aired so far, Twin Peaks (subtitled The Return) is a worthy new entry in his canon.

To recap: Some 25 years ago (within the timeline of the show), FBI Agent Dale Cooper journeyed to Twin Peaks, Washington, to investigate the murder of local prom queen Laura Palmer. In doing so, he both fell in love with the town and began digging into its nefarious, drug- and sex-fueled underbelly. He also encountered a mystical netherworld, the red-curtained Black Lodge, that lay at the core of Twin Peaks’ pain and suffering. In the series finale, he was trapped there while the demonic spirit “Bob” (Laura’s true murderer) took charge of his body.

In Twin Peaks: The Return, this possessed version of Dale, referred to as “Mr. C,” is tooling around the country with a lush mane of hair and inky-black eyes. He appears, in some way, to be responsible for a gruesome new murder, this time in South Dakota, ostensibly at the hands of a very confused, distraught school principal, Bill Hastings (Matthew Lillard). Though Frank Silva, the actor who memorably played Bob on Twin Peaks, sadly died many years ago, MacLachlan does a wonderful job summoning his terrifying affect; his Mr. C, who coldly dispatches more than one victim in the first two episodes, is a tour de force.

The real Dale remains trapped in the Black Lodge, speaking (at various times) to the dearly departed Laura, the helpful spirits Mike (Al Strobel) and the Giant (Carel Struycken), and the “Man From Another Place,” once played by Michael J. Anderson, who has been recast as a massive talking tree topped with a gelatin brain. That image, along with another of the first episode’s mysterious monsters (a murderous humanoid figure), seemed right out of Lynch’s earliest pieces of filmmaking—his frightening short films and his debut feature Eraserhead.

But Twin Peaks: The Return often reminded me of Mulholland Dr., Lynch’s later effort at TV storytelling (which, once it was rejected by ABC, was retooled into a 2001 feature). The early portions of that film, much of which were intended to set up a larger television narrative, were hilariously byzantine. As the protagonist Betty (Naomi Watts) tried to make her entry into Hollywood, Mulholland Dr. delved into meetings between various unknown players, be they cowboys or taciturn studio executives. Each played out in circular, stilted dialogue that seemed to amount to very little on the surface but was still rife with unease, suggesting a dark, bureaucratic underpinning to all of the world’s nightmares.

Twin Peaks: The Return was filled with similarly portentous moments. What exactly is the enigmatic glass box being filmed in New York City by several cameras and attended to by a nonplussed grad student (Ben Rosenfeld)? Who is Mr. Todd (Patrick Fischler), the Las Vegas mogul who ominously warns his assistant Roger against doing business with his unseen benefactor? Why did the murder committed by Bill Hastings (Lillard’s character) involve two dead bodies and multiple extra-marital affairs? And just what did the Log Lady (a powerful cameo from the now-deceased Catherine Coulson) want to communicate to Twin Peaks’ deputy chief of police, the ever-reliable Hawk (Michael Horse)?

Plenty of things will come to light over the next 16 hours; likely, plenty more won’t. But while Twin Peaks’ return was, in general, more given to the open brutality and daring surrealism of Lynch’s film efforts, it did often recall its more straightforwardly melodramatic forebear. That long, dragged-out revelation of the crime scene in South Dakota was delightful, from the dog-owning, busybody neighbor bugging the cops to the flat confession of forbidden love from Bill’s wife Phyllis (played by Cornelia Guest, a New York socialite) to her lawyer. Lynch’s fondness for the random bystanders in his work can sometimes feel like a twisted take on a Far Side cartoon, but that doesn’t make it any less fun to watch.

Twin Peaks remains the nightmare fuel it always has been. The sight of an oily-black ghost in the prison cell next to Bill’s was truly alarming; the same goes for Dale’s return to the real world, again via the glass box (after a space-time rupture in the Black Lodge’s iconic floor). In the intervening decades, Lynch, and thus Twin Peaks, has only gotten more abrasive. But that closing scene at the Bang Bang Bar, where an older (and just as stunning) Shelly (Mädchen Amick) and James (James Marshall) locked eyes across the room, still had me swooning over memories of old. Laura Palmer may be dead, and yet—as she told us—she lives.

Celine Dion Saved the Billboard Music Awards
May 22nd, 2017, 06:37 PM

The Billboard Music Awards is where you can have all your worst suspicions about today’s pop music confirmed. The biggest current stars seemed disposable as they performed in Las Vegas Sunday night: Drake let waterworks replace showmanship as he rapped in the middle of the Bellagio’s fountain; Lorde took the concept of a fake karaoke bar to its least exciting conclusion; Bruno Mars and Ed Sheeran gave new meaning to “phoning it in” by just having overseas concerts simulcast on ABC. The worst set—Drew Taggart of the Chainsmokers mumbling millennial Mad Libs about premature nostalgia while climbing a staircase and then sitting down—felt like a satire on the overrating of white male mediocrity. Still, hosts Ludacris and Vanessa Hudgens pantomimed breathlessness throughout the night. Each muddled performance, we were told, was “epic.”

One song actually was epic, though. And befitting various pre-made narratives about national malaise, it came from a Canadian balladeer rehashing an adult-contemporary hit of two decades ago. Celine Dion’s “My Heart Will Go On” saved the Billboard Music Awards, and watching it might now save your Monday. (The full version is 1 hour and 17 minutes into ABC’s telecast.)

Dion was once a divisive cultural figure, seeming to embody all of pop’s sentimental excesses in one quivering human. These days, though, she’s taken on elder-stateswoman vibes as a generation of stars raised on her music have loudly proclaimed her influence and critics have given her new respect. There’s also the matter of personal tragedy. In January of 2016, her husband René Angélil died from cancer. Just months later she was at the Billboard Music Awards, powerfully belting Queen’s “The Show Must Go On,” launching her as a new icon of perseverance.

She returned to the Billboard stage on Sunday for the 20th anniversary of Titanic, and the occasion might suggest the main value of her performance would be time travel. Clips of a young Leonardo DiCaprio and Kate Winslet indeed played behind her—yes, including the “king of the world” moment. The now-unfashionable production choices of the original recording (those dewy keyboards!) remained in this arrangement, somewhat endearingly. But the startling thing about the performance was how arresting it was on its own terms, in the present.

Visually, we got pop opulence so well-executed that it transcends kitsch. Dion stood behind and then under a jeweled curtain hanging from an enormous chandelier (or jellyfish). Her dress from Stéphane Rolland Haute Couture was a white winged sculpture, strikingly asymmetric in its evocation of the angelic. The woodwind player performed on a darkened stage across the room, allowing for high-drama camera swoops. All of which is, of course, preposterous—as preposterous as the song itself.

Dion mostly stood still, her vocals doing more than merely recapturing the song’s original power. The overplayed smash somehow sounded new, as she injected into it greater quantities of her oft-imitated tics: those Québécois inflections and that yodel-y way with syllables. The lyrics about everlasting love, in the context of her recent familial loss, necessarily took on new meaning (Dionheads know it was Angélil who pushed her to record the song in the first place). For the grand final chorus, the lighting changed and the room was bathed in disco-ball sparkles; Dion was on the verge of tears.

But Hudgens actually was in tears by the end. Ludacris seemed, for once in the ceremony, truly wowed as he asked for an extra round of applause. Backstage, Dion proceeded to mint another memorable moment: As Cher, recipient of the same Icon Award that Dion won last year, performed “Believe” onstage, Dion sang along in front of a crowd of reporters. In instantly viral videos of the moment, you can see the wings of her dress wiggle as she bops along to another diva-goddess giving another showcase of resilience. “My Heart Will Go On” played as solemn and this backstage vignette did not, but both stood out amid Sunday night’s din for simply seeming genuine.

The Leftovers: Time to Dive
May 22nd, 2017, 06:37 PM

Each week following episodes of the third and final season of The Leftovers, Sophie Gilbert and Spencer Kornhaber will discuss HBO’s drama about the aftermath of 2 percent of the world’s population suddenly vanishing.


Sophie Gilbert: One of the most intriguing choices The Leftovers made early on was to frame the show around the Garveys, a family that was technically untouched by the Great Departure, but was nevertheless ruptured by the events of October 14. As was explained in the pilot, both Laurie and Tommy abandoned the unit after the event—Tommy to become a disciple of the dubious healer Holy Wayne, and Laurie to join the Guilty Remnant, leaving behind a husband and a teenaged daughter who both desperately needed her. As the first and second seasons played out, we got inklings of why Laurie had been so discombobulated by the Departure. But it wasn’t until this episode, “Certified,” possibly the most heartrending of the third season, that Laurie’s pain became totally clear.

As a therapist, Laurie’s whole career is based on understanding, empathizing with, and helping lessen the pain of others. So the opening scene, which flashed back to the woman who lost her child in the very first moments of the pilot (whom Kevin later encountered in a bar), illuminated how destabilized Laurie was by this person, and by all the people, whom she was completely unable to help. “Tell me what to fucking do,” the patient entreated. “I don’t know,” Laurie finally said, after a few minutes in which her eyes seemed to harden and go dead. So she took handfuls of pills, wrote a suicide note, lay down to die, thought better of it, made herself throw up, and then sought out the people who seemed to have definitive guidelines for responding to this completely inexplicable event. (Dress all in white, smoke cigarettes, get nihilist, and make people remember.)

Since season one, of course, Laurie’s recovered some of her pre-Departure poise, with interludes in which she counseled former GR members (unsuccessfully, as the tragic murder-suicide of Susan in “Off Ramp” revealed), had a psychotic pitch meeting with a publisher, and ended up in Jarden, where she married John Murphy and seemed to find a measure of peace in helping strangers by pretending to talk to their dead relatives (not, it’s worth emphasizing, helping strangers who’d lost loved ones in the Departure). But as “Certified” showed, she’s still unmoored by the pain of others. By John, who can’t accept that his daughter is really dead. And most of all by Nora. Laurie’s face crumpled when she had to say goodbye, possibly forever, to the woman she’d spent the whole episode squabbling with, who’d given her a black eye and abandoned the same man whom Laurie had left behind seven years ago. In that moment by the shore, Laurie seemed to feel all the agony of all the people on earth who’d lost people, and to find that agony impossible to bear.

I’ll never complain about a Laurie-centric episode, partly because Amy Brenneman is so spectacular to watch, and partly because she gets some of the wryest lines in the show. “Wow, so everybody wants something,” she deduced after arriving at the ranch. “A brain, a heart, courage.” She also summed up Kevin Sr.’s plan: “You want to drown Kevin so he can go to this place where the dead people are, and while he’s there he’s gonna learn a song from one of the dead people that he can bring back to you, so you can sing it and stop a biblical flood that’s gonna happen tomorrow.” Even to Pop, it sounded nuts. “Are we lunatics?” he asked her. “Yeah, it sounds crazy,” Laurie said. “But these are crazy times, huh?”

Laurie’s composure in this episode reminded me of a great nihilist shruggie. Yes, everyone is acting totally crazy, and no, there’s nothing she can do about it. So she just sat on the sidelines as Nora seemed intent on going into what Laurie described as a “suicide machine,” as Kevin Jr. seemed intent on being drowned, and as her husband and her former father-in-law both seemed intent on going along with the plan. (The fact that Laurie couldn’t or wouldn’t help Michael, who called her behind his father’s back to seek her help, was particularly tragic.) Even when John begged her to intervene, telling Laurie that they could still go home and forget about all this craziness, it didn’t force her hand. “I think you’ve gotta see it through, John,” she said. “You’re so close.” But close to what?

So, in the end, Laurie gave up on her life’s work of advocating rational behavior—of being the person in Nora’s childhood story who crushed the beach ball at the baseball game before it could go onto the field and result in “fucking chaos.” More than that, she seemed to give up on everything. The final few moments—in which she inhaled four times before launching herself into the ocean, following the discussion with Nora about a diving “accident” being the kindest way to kill yourself—were brutal. It isn’t entirely clear yet whether she’ll emerge from the water, and whether the deus ex cellphone call from Jill might change her mind. But her final acknowledgement to Kevin, “We’re all gone,” seemed to imply acceptance, and not the good kind.

Spencer, I’ve been so obsessed with Laurie that I’ve barely mentioned Nora, whose pain in this episode was jagged and raw. Or Matt, who did maybe the most selfless thing he’s ever done in deciding to stay with Nora as she goes into the machine. Or Officer Koalafart. What did you make of “Certified”? Is it weird that the lyrics to “1-800 Suicide” by Gravediggaz, which accompanied the opening credits, mention a lion’s den?


Spencer Kornhaber: As The Leftovers has so often done, this episode sent me down an existentialism Wikipedia hole. Relevant: Albert Camus, the absurdist author whose work was perused by Tommy Garvey in the series premiere, once wrote that “there is but one truly serious philosophical problem and that is suicide.” If you suspect life has no intrinsic meaning, what does it matter if you opt out? Absurdism saw virtue in man rejecting suicide and soldiering on “without relinquishing any of his certainty, without a future, without hope, without illusions,” as Jean-Paul Sartre put it, describing the protagonist of Camus’s The Stranger. But other great thinkers saw the matter differently. The nihilist Nietzsche: “The thought of suicide is a great consolation.”

Laurie Garvey often presents herself as an absurdist, accepting life’s randomness with poise. But deep down she is (as you’ve indicated, Sophie) a nihilist. Anyone who joins the Guilty Remnant would have to be. In the gut-churning opening flashback, we saw that her donning of the cult’s white clothes was, in fact, a suicidal act—a way to address her darkest impulse without actually dying. We also got a sense of how her inner torment stems from the same clarity of perception that once made her a successful psychotherapist. The patient on her couch experienced loss so senseless that it indicts existence itself, and Laurie can’t fabulate a silver lining or mystical explanation like so many other characters do. She can only see reality, and reality is bleak.

Throughout this episode, Laurie played helper to people who, by contrast, have found fanciful sources of meaning—whether by becoming disciples of Kevin or, in Nora’s case, by developing a feverish need for closure at any cost. From moment to moment, the non-Laurie characters express more acute anguish than the cool, calm Laurie ever does. But they also have access to the thrill of superstition and a sense of purpose. She knows the value of those things—hence her palm-reading startup—but isn’t able to give into them. As her friends and family disappear into metaphysical madness while the sky ominously darkens ahead of the big anniversary, she’s left with nowhere to go but under the sea.

Sophie, you used exactly the right word for tonight’s final scene: “brutal.” The camera lingered for about 15 seconds on the water after Laurie splashed in, and director Carl Franklin had to know viewers would be aching for her to bob back up in the last instant. But nope—she’s still down there, and we don’t know if it’s for good. When she symbolically committed suicide the first time, by joining the GR, it was family that pulled her back. We are now left desperately hoping that family will do the same again: Existence may be pointless in Laurie’s view, but she’s got a charming son and daughter who can liven it up at least.

The theme of suicide—and the slew of goodbyes between Laurie and other characters—made this one of the heaviest hours of The Leftovers, and that’s really saying something. Even the humor was ultra-dark. I yelped when Kevin Sr., in the background of Laurie’s shot, matter-of-factly thwacked the cop in the head. Nora seemed completely unhinged throughout, and her scuba proposition was part edgy joke and part truly evil psychological violence. Christopher Eccleston continued to mine queasy physical comedy from Matt’s cancer, this time by shoving tissues up his bloody nostrils while mentioning his incinerated parents. Laurie’s drugging of the Last Supper with dog medicine made for excellently gonzo cinema, with the scene becoming progressively more surreal as the would-be apostles began to lose their faculties.

As we near the end of the show, the storytelling is having to accomplish a lot—maybe too much. In particular, the dynamic between Laurie and her new husband tripped me up. John Murphy has been mostly sidelined this season, so we didn’t really understand why he’d switch from being his wife’s defender on the lion ferry to ditching her in Melbourne to head to the ranch. Their somewhat strained heart-to-heart outside of Grace’s house sounded like the work of a writers’ room hustling to address loose ends—hence the explanations of Erika’s departure, of John and Laurie’s first date, of why they shred their cash, and of what the deal is with the half-finished church and boat. It also felt, transparently, like Kevin Carroll being given his last big scene.

The talk between Laurie and Kevin landed much better for me. These ex-spouses’ histories have been well-developed over three seasons—and yet they’ve had precious few on-screen moments of just chatting. With them both aware it might be the last time they ever see each other, they made confessions that escalated from the comical (“rest in peace, Mr. Fuzzy”—err, Funny) to the devastating (Kevin didn’t have any clue about the one Departure from his immediate family). We were also given a helpful glimpse into two very different kinds of suicidal mindsets. Kevin needs to die—or at least temporarily die—to feel “alive.” But Laurie just wants to be done. Did you catch the double entendre in her final words to him? Explaining why Kevin can keep the lighter she just tussled with Nora to retain, she says she doesn’t smoke anymore: “I quit.”