Technology | The Atlantic
A Catfishing With a Happy Ending
October 19th, 2017, 10:30 AM

Emma Perrier spent the summer of 2015 mending a broken heart. By September, the restaurant manager had grown tired of watching The Notebook alone in her apartment in Twickenham, a leafy suburb southwest of London, and decided it was time to get back out there. Despite the horror stories she’d heard about online dating, Emma, 33, downloaded a matchmaking app called Zoosk. The second “o” in the Zoosk logo looks like a diamond engagement ring, which suggested that its 38 million members were seeking more than the one-night stands offered by apps like Tinder.

She snapped the three selfies the app required to “verify her identity.” Emma, who is from a volcanic city near the French Alps, not far from the source of Perrier mineral water, is petite and brunette. She found it difficult to meet men, especially as she avoided pubs and nightclubs, and worked such long hours at a coffee shop in the city’s financial district that she met only stockbrokers, who were mostly looking for cappuccinos, not love.

It was a customer who had caused Emma’s heartache, two months earlier. Connor was one of London’s dashing “city boys,” and 11 years her junior. He had telephoned her at work to ask her on a date, which turned into an eight-month romance. They went night-fishing for carp near his parents’ home in Kent, where they sat holding hands in the darkness, their lines dangling in the water. One day at the train station, Connor told her it wasn’t working; he liked nightclubs more than he liked being in a relationship. When she protested, Connor said that he’d never loved her.

To raise her spirits, Emma huffed and puffed her way through a high-energy barbell class called Bodypump, four times a week. Though she now felt prepared to join the 91 million people worldwide who use dating apps, deep down she did not believe that computers were an instrument of fate. “I’m a romantic,” Emma told me, two years after the internet turned her life upside down. “I love to love,” she said, in a thick French accent. “And I want to be loved too.”

As soon as her dating profile went live, Emma’s phone started to bleep and whistle with interest from strangers. The app allowed her to gaze at a vast assortment of suitors like cakes in a coffee-shop window, but not interact with them until she subscribed. That evening, a private message arrived in her inbox. It was from a dark-haired Italian named Ronaldo “Ronnie” Scicluna, who looked to Emma like a high-school crush. But the text was “floue,” Emma told me, not knowing the English word for “blurred.” The app was holding Ronnie’s message ransom.

That night, Emma FaceTimed her sister and showed her Ronnie’s photos: “Oh my God, look at the guy!” she giggled, as they swiped through his profile pictures. He was boyish yet mysterious, like the kind of dangerous male model who steers sailboats through cologne commercials. But according to his profile, Ronnie was a 34-year-old electrician in England’s West Midlands, just 100 miles away.

Gaëlle, Emma’s twin, lived in France and was married with an 11-year-old daughter. The sisters had gossiped on daily video calls since Emma emigrated to the United Kingdom five years earlier. Emma had to learn English “chop-chop”—as Londoners say—and now she too was ready to meet someone special. Ronnie seemed exciting, so she paid the £25 ($34) subscription to Zoosk.

Ronnie’s message materialized. It said: “You look beautiful.”

A rally of messages followed. Emma discovered that she and Ronnie were two lonely Europeans working blue-collar jobs in England. Charming Ronnie attempted a little French, but when Emma wrote to him in Italian, she was surprised that he didn’t speak it. His mother was English, Ronnie explained, and his Italian father spoke English too, “except when he swears.”

Their conversation moved from Zoosk onto WhatsApp, a free messaging app. Each morning on the train to work, Emma sat glued to her iPhone. She wondered how a guy like him was interested in her. “I’m very natural,” Emma said. “I mean, I’m nothing. I’m very simple you know ... so I was flattered.” In her favorite photograph, Ronnie wore a leather jacket that made him look like a pop star. As a teenager, Emma had obsessed over the British boy band Take That. But Ronnie was the opposite of a celebrity; he was down-to-earth.

Pictures from Emma and “Ronnie’s” profiles on Zoosk; “Ronnie” is pictured wearing Emma’s favorite leather jacket (Instagram)

“You could easily have picked someone else,” Ronnie told her one day.

“No. You’re the only one I wanted to talk to ... I paid because of you,” she replied.

“As soon as I saw your picture I wanted you,” he wrote.

“Makes me happy to know that,” Emma replied.

When four red heart emojis appeared on her screen, Emma was thrilled. Unlike her ex-boyfriend, Ronnie seemed mature and attentive. Ronnie was easy on the eyes, funny, and caring, but there was one problem: He did not exist.

* * *

Ronaldo Scicluna was a fictional character created by Alan Stanley, a short, balding, 53-year-old shop fitter—a decorator of retail stores. Alan lived alone in Stratford-upon-Avon, the birthplace of William Shakespeare. Like one of the Bard’s shape-shifting characters, Alan used a disguise to fool women into romance, and to prevent himself from getting hurt. His alter ego “Ronnie” was a ladies’ man, charming, and attractive—everything Alan was not. “I was in a pretty lonely place,” he told me during an emotional interview. “I wasn’t feeling the most attractive of people, I might say. You know, I always struggled with self-confidence and ... I was going through a messy separation and I was just feeling like I needed somebody to talk to.”

When his marriage of 22 years failed, Alan, who has an adult daughter, was devastated and found himself uninterested in the opposite sex. “I’d just had enough,” he explained. For almost a year, he allowed his decorating work to consume him, but boredom set in. Alan wanted to “mix” with new people, he said, but feared public rejection in his close-knit town. Then one day he noticed the online-dating service Zoosk.

Alan elected to bypass the company’s selfie-based verification system, a spokesperson for Zoosk told me after an internal investigation. He admitted that he had stolen the photographs of a male model he found through Google. “I’m always nervous about posting personal images of myself,” he explained. “I just don’t like pictures of me. It goes back a long way, to be honest.” Emma’s profile was the first he saw. He was captivated.

Alan had done it before, at least five times, he admits. He’d become online pen pals with single women from all over the world, but avoided video calls and meetings. He found the thrill of the chase electrifying, with none of the awkward stuff like first dates. Emma was just another mark, and their flirty exchanges were innocent fun, he said. “Catfishing is prevalent across the internet,” he told me. “Everybody does catfishing.”

The term catfish was added to the Merriam-Webster dictionary in 2014. It refers to a person who creates a fake social-media profile, usually with the goal of making a romantic connection. The word took on this meaning after the 2010 documentary Catfish, in which a subject told a story about the journey of live cod from the United States to China. Apparently, to prevent the cod from becoming lazy and their flesh turning to mush, seafood suppliers add to the tanks their natural enemy, the catfish. A predator creates excitement.

Alan was right: Catfishing was growing in popularity. “Now you don’t need the imagination of a Tolstoy or Dickens to create a totally believable but fictional identity,” said the cyber-psychologist Mary Aiken, author of The Cyber Effect. “It’s a matter of cut and paste.” The results can be devastating. In 2006, Megan Meier, a 13-year-old girl in Missouri, was duped into an online relationship with a fake teenage boy created by neighbors. After the online romance soured, she committed suicide. By June of this year, catfishing was so prevalent that Facebook announced it was piloting new tools to prevent people from stealing others’ profile pictures, as Alan did.

His flirting with Emma soon progressed from small talk to in-jokes, pet names, and late-night telephone calls. To Emma, his lilting West Midlands accent somehow fit perfectly with the images of the model. In October of 2015, she wrote how happy she had become since “meeting” him.

“Are you not usually happy, stinky?” he asked.

“I am,” she said, “but you changed something.”

They both agreed to delete the dating app. Emma constantly asked for a physical date, but was crestfallen when Ronnie made excuses. This had happened before. Alan knew how to prolong the relationship with a combination of evasion and false promises. He told Emma that decorating new shops took him all over Europe. Any free time was spent drinking whiskey with his father, or on vacation at his parents’ villa in Spain, he said. Maybe one day she could stay in “bedroom three.” Emma just wanted a local dinner—they lived only 100 miles apart.

“It’s hard to keep everyone happy,” Ronnie complained. “Dad loves me working and wants me to keep doing better. Mum wants me to quit. She worries about me. My health. Stress. Dad thinks I handle it well.”

“I think what you need is a [girlfriend] to look after you,” said Emma, before he changed the subject.

“Do you want to know why I started online dating?” she asked him one night. “Because I wanted to ... meet that someone and to start something with that someone ... not to have a broken heart ... which is even more painful when you have never met someone.”

“Me too,” said Ronnie. “We both want the same thing.”

“Give me a date then,” Emma wrote. “I will suit your availability.”

She waited for his reply.

“I don’t think you realize how difficult it is for me to get time off,” he wrote.

“Just a dinner to start with,” Emma begged. “I can do the travel ... then if the connection is really there we will find a way.”

“Do you think it will be there?” he asked.

“I have never been so sure.”

“Do you have faith in us?” he asked.

“It could work perfectly well,” Emma wrote.

“And I love you,” he wrote.

“And I love you too,” she replied.

* * *

Little scientific research exists about catfishing, but experts say that victims tend to be lonely, vulnerable, or missing something in their lives. John Suler, a clinical psychologist and author of Psychology of the Digital Age, said that victims without a real-world social network can overlook what is too good to be true: “It always helps to have friends and family reality-check relationships online,” he said. But Emma had few close friends or family in London. And Emma was looking for love.

Emma met her first boyfriend at age 15. When their high-school romance ended a decade later, she ran away, high into the French Alps, to find seasonal work. She did not find love there, and decided to keep running, this time to England, where she had dreamed of living since visiting as a child. When she arrived, aged 28, there were 127,601 French-born residents in London, and by 2015 that number had doubled, making London the sixth-biggest French city, according to its mayor. But the language barrier nearly made Emma quit after two months. “It’s not like the same as you listening to that song in your bedroom when you’re 16,” she said.

“Ronaldo Scicluna” claimed to be a builder living in Stratford-upon-Avon, England. (Handout)

She loved talking to Ronnie, whose conversations were full of construction-site bonhomie, British slang, and flirtation. One day, she received a black-and-white modeling photograph of him wearing a tiny pair of Speedos. Emma fired back emojis with laughing faces stained with tears of joy.

“I love that picture thank you,” she replied, “I saved it.”

Alan, who is a fitness fanatic, was now spending his mornings on long-distance runs. Decades of manual labor had kept him fit, but he was resentful about losing his hair at a young age. “In my 30s it started falling out,” he said. “I was exactly like my dad.”

To him, Emma had become not just a friendly voice on the phone, but a project. When he discovered that Emma spent three hours a day commuting to work, Alan encouraged her to find a local job. “I was on her journey in life, trying to guide her,” Alan said.

By January of 2016, Emma was thrilled to receive a job offer at an Italian chain restaurant three miles from her home. As the new assistant manager of Zizzi in Richmond, she managed a team of Poles, Spaniards, and Greeks (there are no real Italians in this story). When Emma boasted about her “long-distance” love, the busboys asked why they’d never met him. Emma told them he was “extremely busy.”

Alan was running out of excuses. “It was eating at me because I knew the longer it went on, the more problematic it would become in the long term,” he said. Like Malvolio in Twelfth Night, Alan had donned a ludicrous disguise to win the affections of his Olivia. And in a world where Alan felt ugly and invisible to the opposite sex, Emma showered him in “adoration.” In his mind, Alan minimized his lie: “Everything I told her about me, apart from who I was, and the age, was true.”

One night, after the last customers left Zizzi, Emma closed the restaurant with a popular, baby-faced Spanish waiter named Abraham. As they shut down the huge pizza oven, and packed away the cutlery, Emma revealed how she longed to meet her mysterious boyfriend. Abraham listened for a while, then turned to his manager and said: “But Emma, the guy doesn’t want to meet you ... maybe it’s not even him.”

Emma insisted that they’d talked on the phone.

Abraham said her boyfriend was “probably an old man.”

Then he said he’d heard about an app that could help.

“He could be a psycho,” he added.

Emma was hurt and confused. After Abraham left, she found herself alone in the restaurant. Looking through the window, she watched the happy couples walking along the black cobbles of King Street. She longed for the day when Ronnie would appear at Zizzi, sweep her off her feet, and prove them all wrong.

By the spring of 2016, Emma’s family recommended that she cut off all communications with Ronnie. He had refused to meet her after six months, they said. “I didn’t want to listen to them,” Emma said. But one evening after work, she lay on her bed and downloaded to her iPad an app called Reverse Image Search. It is one of many apps that crawl the internet to find the original source of a profile picture.

“Believe me I was scared to use it for the first time,” Emma said. She uploaded the photograph of Ronnie wearing his leather jacket. The results arrived in seconds: The man in the photographs was a model and actor from Turkey, called Adem Guzel. Emma was confused. She found his model-management website, an official Twitter account, and his Facebook. Adem’s closest connection to the United Kingdom was that he had studied at the Gaiety School of Acting in the nearby Republic of Ireland.
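Such lookups can also be scripted. As a rough sketch of the general technique (assuming Google’s Cloud Vision web-detection API and a hypothetical local file name, not the app Emma actually used), it might look like this:

    # Illustrative sketch only: a programmatic reverse image search via
    # Google Cloud Vision web detection (not the app described above).
    # Assumes `pip install google-cloud-vision` and configured credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open("profile_photo.jpg", "rb") as f:   # hypothetical local file
        image = vision.Image(content=f.read())

    web = client.web_detection(image=image).web_detection
    for guess in web.best_guess_labels:          # e.g., a likely name or topic
        print("Best guess:", guess.label)
    for page in web.pages_with_matching_images:  # pages where the photo appears
        print("Found at:", page.url)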

“Do you have anything to tell me about Adem Guzel?” she wrote in a text message.

“It is me,” Alan replied, thinking fast. Those were his modeling pictures, he said. He’d once used another name.

“It was a long time ago,” he promised.

Given the opportunity, Alan couldn’t tell the truth. “I would have lost someone that I really treasured,” he told me. But Emma demanded that he reveal himself. FaceTime was “for teenagers,” he said. When she insisted, he yelled: “Stop! Don’t ask me anymore!”

But Emma still wanted to believe in the fantasy, not the truth.

“I couldn’t believe it because, you know ... when you talk to someone every day, and you share your life ... he was my confidente.”

And why would somebody claim to be someone else online?

Julie Albright, a digital sociologist at the University of Southern California, says catfishing can be addictive: “Suddenly finding success with romantic partners online is exciting, and in fact intoxicating for certain people,” she said, adding that catfish often target more than one victim: “Putting several hooks in the water and getting several relationships going is the way to hedge your bets.”

In August of 2016, nearly a year after his and Emma’s relationship began, Alan had computer troubles. He bought a new one, but set it up using his personal email address. When he sent Emma a message, it sounded like Ronnie, but the email address said “Alan Stanley.”

It was his first mistake.

“I lied,” Alan told me. “I said, no, I bought this computer from somebody else and they haven’t changed it yet.”

Emma was now overwhelmed with doubts.

During that summer of 2016, Emma allowed her long-distance relationship to continue as she started what she proudly calls “my investigation.” One day Ronnie sent her a photograph from an aquarium, the fish from Finding Nemo. It was either a False Percula clownfish or a True Percula clownfish—only a saltwater aquarist could tell the difference—but Emma was more interested in uploading it to her app. “This Nemo sent me to TripAdvisor,” she said. It illustrated a review written by “Alan S.”

“I knew,” Emma told me. She typed Alan’s email address into Google.

I asked what she found.

“Everything, everything,” she sighed. “His Twitter accounts. Where I’ve seen his face.”

“It was devastating and I felt sick,” she said. “You have no idea how much I’ve been hurt inside.”

Alan was in early-morning traffic when his cellphone rang.

“Is your real name Alan?” Emma asked.

“No,” he replied.

“But it is, it is, it is!” Emma said, sobbing. Alan accused her of having trust issues.

“Don’t talk to me about trust, Alan Stanley!” Emma yelled. The call, and with it Alan’s masquerade, was over.

Adem Guzel (left) is a model and actor from Turkey who had his identity stolen by Alan Stanley (right). (Handout / Facebook)

From a quiet corner of a half-decorated shop, Alan called Emma back. “I could not be any more apologetic,” he told me. “I told her everything.” Emma told him she felt like a fool. They both cried. It was, Alan said, a “big error of judgment, the worst and biggest mistake of my life.” But even in his telling of “the truth,” Alan told Emma he was 50, shaving off a few years.

Emma had questions. Was he a pervert? Alan sent her a real photograph of himself, wrinkles and all. “It might sound cruel what I’m going to say,” Emma told me, “but I carried on talking with him, after I knew who he was, only because I wanted to know why he did that to me,” Emma said. “I’m 34 at the time, but maybe another girl, when she finds out, she could maybe go too far, maybe kill herself.” After the big reveal, Emma asked Alan if he wanted to meet her. “I really wanted to go, to end the story,” she said. But was Alan dangerous?

Emma decided that she needed to protect others from his scam. On September 16, 2016, she wrote a Facebook message to the Turkish model:

Hello Adem, we don’t know each other but a year ago I met a guy online and that man is using your picture and pretends he is you under another name. I wasn’t sure if getting in touch with you was a good idea but I needed you to know, kind regards, Emma.

* * *

Adem Guzel nearly ignored the message. The shy, 35-year-old model woke up in the bohemian district of Cihangir, near Istanbul’s famous Taksim Square, suffering from a cold. It was not the first message of this nature he had received. Adem poured a cup of tea in the kitchen of his aparthotel, a type of bed-and-breakfast that had once been popular with travelers, before political instability and terrorist attacks killed off Turkish tourism. He drew a hot bath, undressed, and sank into the water. Maybe it was the head cold, Adem thought, but it was like an invisible person was yelling in his ear: “Pick up the phone!”

Adem toweled off and found his iPhone. Something about the sincerity of Emma’s message stuck in his mind. He wrote back in broken English. “And the conversation just started,” Adem told me, in a gruff, Turkish voice. When he heard how Alan had tricked Emma, Adem was furious. Emma asked him if he wanted to video call.

Emma was on a bus in Richmond when she read the message. She dashed home and showered, with a strange flutter in her stomach. When Adem’s face appeared on her iPhone, Emma was hysterical. “It was crazy,” she said. “I wasn’t sure it was him, I was always in doubt.” But there he was, talking, smiling, nervously running his fingers through his hair. “I never do FaceTiming,” Adem said. “But somehow I wanted to do it with her.”

“You are so real,” Emma said, crying. “You really exist!”

Emma had questions. In English, their shared second language, Adem explained that he had grown up in a coastal Turkish village, then moved to Istanbul and enjoyed a prosperous modeling career. But his plans to become a television actor had stalled when he refused to enter a Turkish reality show, which he said operated on a “casting-couch” basis. Instead, Adem moved into a friend’s deserted aparthotel as a temporary manager.

As they talked, Emma summoned her sister on FaceTime and held the iPad up to her iPhone. Gaëlle and the Turkish model waved at each other from opposite sides of Europe. After the call, Adem and Emma exchanged text messages, but Adem soon packed his bags and returned to the village whence he came. Şarköy, pop. 17,000, had the cellphone signal of a small Turkish village, and their conversation fizzled out.

* * *

On Friday, November 11, 2016, Alan Stanley stepped off a train at London’s Paddington Station. He strolled to a nearby row of white-pillared Georgian townhouses and checked into the Arbor, a swanky, boutique hotel with views of Hyde Park. That evening, Alan walked out of his hotel, and into the nearby London Hilton, where Emma was nervously waiting in the lounge. She said she needed closure, and to see the truth with her own eyes. Alan “needed to apologize to her face-to-face,” he said.

His face was red with shame. “The hug went on for about a minute,” he told me. “I was just, like, quite tearful.” Emma pulled up an armchair and they sat uneasily side-by-side, making small talk. Then, Alan said he was sorry.

He said he did it to escape the agony of loneliness. When Emma studied him, she saw a man just two years younger than her own father.

Emma and Alan left the Hilton for some fresh air, and strolled along a tree-lined pathway known as Lover’s Walk. In Alan’s telling, they passed Hyde Park’s “Winter Wonderland,” where couples were riding a Ferris wheel or whizzing around an ice-skating rink. The walk—20,000 steps, according to his iPhone’s health app—was one of the longest and best of his life.

“We talk, talk, talk,” Emma said. She asked him about drinking whiskey with his father. Was even that true? “He said his dad passed away a few years ago.”

While Alan considered the evening a date, Emma’s memory of the walk was quite the opposite of romance. The park was “empty,” she said. Her only memory was pausing at a memorial to the 52 victims of London’s July 2005 bombings.

“It was a perfect night,” Alan said. “She paid for dinner that evening. Italian restaurant in Paddington.”

Alan even insinuated that Emma had stayed the night at his hotel. “As a gentleman I’m very reluctant to talk about this side of it,” he said. Emma flatly denied it.

“I was pleased I met him obviously,” Emma said curtly, “And that was it.”

But that wasn’t it. Emma could not erase Alan from her life. After their meeting in London, they met several times. Just before Christmas of 2016, Alan presented her with a Swarovski bracelet. “She bought me Hugo Boss socks,” Alan told me. “They’re not cheap.”

“It was a relationship that we built ... You develop a friendship, you talk ...” she explained, her voice breaking as she described their toxic relationship. She was helplessly bonded to Alan and he was obsessed with her, high on virtual validation: “She made me feel like I was a teenager again,” he told me.

I wondered if Alan arrived in London hoping that Emma would overlook the difference between him and the model. Maybe his email slipup was just part of a “bait and switch.”

But Emma could tell the difference. “Things started to get a little bit sour between us,” Alan said. “There was a kind of breakdown after Christmas ... her attention suddenly turned more focused toward finding him.” Alan sensed he was competing with the Turkish model for Emma’s affections. He had deleted his fake accounts, and focused his attention on her. Now, he dreaded he would lose her to the man he had unwittingly thrown in her path—an ironic twist worthy of Shakespeare. “I just put two and two together,” Alan said. “I reckoned that they are talking behind the scenes.”

* * *

By January of 2017, the conversation between Emma and Adem had reignited. “I’m not a religious guy,” Adem said, but it felt like fate had pulled them together. They stopped talking about Alan’s scam, and very slowly the conversation between the shy model and Emma, who had so recently been burned, became emotionally charged. But Emma told her sister, Gaëlle, that she felt like she was just starting another long-distance affair. This time, she wouldn’t be played for a fool, and she wouldn’t waste a moment. She invited Adem to London. “It wasn’t to flirt, believe me,” Emma insisted. Adem said yes immediately. He was curious to meet this beautiful French girl, and sure, in London!

On March 31, 2017, Emma sent her catfish a goodbye text message:

Alan I wanted to tell you that tomorrow I’m going to pick up Adem at the airport. And I still don’t know if it’s good or bad but I’m going to meet ‘my Ronnie.’ You built up all this shit, I’m not sure if I should thank you or detest you for that. But this is happening.

It was April Fools’ Day, 2017, when Emma stood beneath the giant arrivals board at London’s Heathrow Airport, searching for Adem’s flight. When a lady beside her noticed her shaking hands, Emma explained that she was waiting for a man from the internet, whom she had never met. The woman froze. “You have to be very careful!” she warned. On the internet, she said, not everyone is who they say they are.

“Well actually, I know ...” Emma began, but the Turkish passengers were already flooding into the arrivals hall.

“Oh my God, it’s happening,” she thought.

When the crowd parted, she saw him walking toward her in a white T-shirt and a blue cardigan, the man in her photographs, come to life. Adem was taller than she expected, and when he recognized her, she felt breathless. As they hugged in the middle of the airport, Emma thought that he smelled “fantastique.”

In a quiet corner, Emma produced an egg-and-mayonnaise sandwich, which she had bought in case Adem was hungry. When he lifted it to his mouth, she noticed his hands were shaking too. “I was really nervous,” Adem said. They walked into the bitter cold air, and Emma summoned an Uber. It seemed to take forever. Adem was very quiet and there was a nervous energy between them. When he stepped off the curb to look for their car, Adem turned around and found Emma at eye level.

Inexplicably, she kissed him.

“Three minutes later I felt like I know her a long time,” Adem said. The spark was undeniable. She gave him a key to her apartment, and together they discovered the city like tourists, goofing around with a selfie stick. Later, when Adem opened his suitcase, Emma spotted the leather jacket from her favorite photograph, and felt starstruck. And Adem couldn’t believe his luck—his soul mate had appeared in his inbox as if by magic.

Adem and Emma take a selfie near their home in Richmond Park, in the London Borough of Richmond upon Thames. (Courtesy of Emma Perrier)

On April 23, 2017, their story became a tabloid sensation in England. “My catfish became cupid,” Emma told the Daily Mirror. “And now we’re living happily ever after.” Soon, other victims of Alan Stanley reached out to Emma. One woman from New York said she had been in a relationship with Ronnie for “years.” When the newspapers described Alan as a “love rat,” he endured meetings about his behavior with his colleagues and employer, and an “awful” conversation with his daughter.

“These last few months have been beyond stressful,” he told me. “I don’t think I’ve slept properly for three or four months now.” Overwhelmed by shame, he moved to a faraway town. But even Alan felt relieved that the story ended in comedy, not tragedy.

“I think it’s brilliant Emma and Adem have met,” he said. “It’s almost like fate.” Alan added that he no longer uses fake identities, and has since met someone special, he said, on Twitter: “A European lady, younger than me, younger than Emma.” There is someone out there for everyone, he added. “I don’t consider myself to be particularly good-looking ... I’m not a David Beckham, or a Tom Cruise, or an Adem Guzel.”

When I spoke to the couple in September of this year, they had been living together in London for six months. “He’s lovely,” Emma said, “He’s a lovely man.” Currently, Adem is chasing his acting dreams in London, and says he recently auditioned for Aladdin, the original, Arabian catfishing story. He read for the lead, a street urchin who uses a genie’s magic to pass himself off as a prince to win over a princess—before realizing that he must be himself.

At home there has been confusion. Emma was making a coffee one day when she looked over and realized: God, this is Adem, not Ronnie. She says Adem is quite different from the gregarious character invented by Alan—he is quiet and sensitive. There are other challenges: Turkey is not yet in the European Union, so Adem can only stay in London for six months at a time, and cannot work. But Emma now admits that the internet is an instrument of fate.

One evening, not long ago, Emma was closing down Zizzi after a busy shift. Night shifts were once her loneliest times, when she would long for “Ronnie” to materialize from the internet and sweep her off her feet. But that night, she noticed Abraham, the disbelieving Spanish waiter, and the rest of the crew, staring at the handsome gentleman waiting in the doorway, ready to take her home.

Radio Atlantic: Derek Thompson and the Moonshot Factory
October 19th, 2017, 10:30 AM

Few journalists have gotten a peek inside X, the secretive lab run by Google's parent company, Alphabet. Its scientists are researching cold fusion, hoverboards, and stratosphere-surfing balloons. Derek Thompson, a staff writer at The Atlantic, spent several days with the staff of X. In this episode, he tells Matt and Alex all about what he found, and what it suggests about the future of technological invention.

Free Money at the Edge of the Tech Boom
October 19th, 2017, 10:30 AM

The latest experiment in a universal basic income will be coming to Stockton, California, in the next year.

With $1 million in funding from the tech industry–affiliated Economic Security Project, the Stockton Economic Empowerment Demonstration (SEED) will be the country’s first municipal basic-income pilot program. As currently envisioned, some number of people in Stockton will receive $500 per month. That’s not enough to cover all their expenses, but it could help people cope with rising housing costs, pay down student loans, or simply save for life’s inevitable problems.

Last year, Stockton rents rose more than 10 percent, putting the city’s rental-price growth among the top 10 in the nation. This is quite a surprise in what Time called “America’s most miserable city” just three years ago. The average rent remains a modest-by-Bay-standards $1,051 a month, but Stockton has a per-capita income of just $23,046, more than $6,000 below the national per-capita figure and a full $8,500 below California’s. If you made the per-capita income of the city, average rent alone would eat 55 percent of your income.
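That 55 percent figure follows directly from the numbers above. A minimal back-of-the-envelope check in Python, using only the rent and income figures cited here (not SEED’s own data):

    # Rough check of the rent-burden claim above.
    average_monthly_rent = 1051   # dollars; Stockton's average rent
    per_capita_income = 23046     # dollars per year

    annual_rent = average_monthly_rent * 12
    share = annual_rent / per_capita_income
    print(f"Annual rent: ${annual_rent:,}")   # $12,612
    print(f"Share of income: {share:.0%}")    # 55%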

As the tech boom that began in the mid-00s continues, its financial blast radius keeps expanding. Tech workers have been streaming into the Bay, yet few homes have been built in the Bay Area’s cities. Home prices and rents have exploded. Longtime residents and newcomers alike have been getting pushed ever further out. And in recent years, Stockton—once one of the cheapest cities to live in California—has become the easternmost outpost of the insane Bay housing market.

“There’s not a shortage of housing. There’s a shortage of money to buy housing,” said Fred Sheil, a member of STAND Affordable Housing in Stockton. “Unless you’ve got Bay Area income, they aren’t interested in talking to you.”

That’s garnered the attention of city leaders, especially Mayor Michael Tubbs, who became the youngest-ever mayor of a medium-sized city when he won a landslide election in 2016. Tall, gregarious, often besuited with a trim beard, Tubbs could become the new face of universal basic income, or as people abbreviate it, UBI.

Stockton won’t be the first UBI project in the Bay (pilots are already in the field in West Oakland and San Francisco), but it will be the first public attempt to show what a basic income can do for people. Unlike those more secretive projects, both the local government and the participants will be reporting what the cash does for them. And the project will be occurring within the context of a regular city government, with all the community engagement that entails.

“The [UBI] conversation is not being had with the people who are going to be impacted,” Tubbs said. “Mark Zuckerberg don’t need $500 a month.”

So, in Stockton, they are planning a six- to nine-month design process to incorporate the city’s residents into the program design, including precisely how the cash stipends will be awarded.

“My bias is that it should go to people who need it the most, but that’s not truly universal. That’s targeted,” he said. “The way our country is now, for something like this to work, everybody has to feel like they are a part of it.”

One idea they’re kicking around is that a specific number of slots would be reserved for what they call their “promise zone” in south Stockton, where they’ve done a lot of existing economic research and development work.

Tubbs approaches the idea of a minimum income from an entirely different place than Silicon Valley’s scions do. Most of the tech proponents of UBI have approached the topic through the lens of automation and the massive devaluation of human labor that they think could result from further developments in artificial intelligence. While giving cash to everyone has an egalitarian ring, when the message is delivered by the ultra-wealthy of Menlo Park and San Francisco, it can feel as if UBI is the crumbs being swept off the real-money table to buy off the masses.

But Tubbs referenced a strain of African American thought expressed by no less a leader than Martin Luther King Jr. “The solution to poverty is to abolish it directly by a now widely discussed measure: the guaranteed income,” King argued in 1967. Though Tubbs didn’t mention them, the previous year, the Black Panthers came out with their famous 10-Point Program. And there it is in point number two: “We believe that the federal government is responsible and obligated to give every man employment or a guaranteed income.”

Perhaps it’s not surprising that different black thinkers in the 1960s came to the conclusion that a guaranteed income would be an effective way to fight the poverty that resulted from structural racism. They’d just seen a generation of federal programs make white Americans much, much wealthier, while also seeing how those same policies discriminated against them. The big programs that were created during the New Deal were boxed in by what the historian Ira Katznelson calls “the Southern cage.” In exchange for creating socialistic federal programs, the then-Democrats of the South required policies that would reinforce the racial hierarchy of the country. Black people’s freedom and economic prospects were the bargaining chip that Franklin Delano Roosevelt and the Congresses he worked with slid over to former slave states in exchange for their support of sweeping legislation.

For example, FDR would create the Federal Housing Administration, but segregation and redlining would combine to create disinvestment in increasingly segregated black neighborhoods across the nation. FDR would get Social Security, but many job categories in which black people predominated would be excluded from it. The GI Bill might have helped black people get an education, but they could not take equal advantage of the Veterans Administration housing benefits because of racist real-estate practices. Job and social programs might seem nice, but the experience of what could happen to nice ideas within American bureaucracy might have made simple cash payments seem more racism-proof than the alternatives.

But Tubbs is not a theoretician or activist. He is the mayor of a poor city, and he knows that people in Stockton need money not just to survive, but to try to lever themselves out of the lower-income brackets through education or entrepreneurship.

In preparation for the UBI project, Tubbs convened a meeting in his old city-council district (where he grew up) in south Stockton with upper-income, middle-income, and poor people.

“We said, ‘What would you do with an extra $500 a month?’” Tubbs said. “One woman said, ‘It’s summer, so that’d be great because my kids are coming back from college and my bills go up.’ One person said, ‘I’d probably save that up to start a business.’ One person said, ‘I’d go back to school.’ It wasn’t: ‘I’m gonna buy a TV or a car.’”

For the poorest people in Stockton, it could help them transition from being on the streets into some kind of housing, or from temporary housing into something more permanent. Extra cash could help people stay in their homes, rather than getting evicted. “Don’t get me started talking about Evicted,” he told me, referencing the surprise hit book by Harvard sociologist Matthew Desmond about the lives of poor people in Milwaukee.

“There was one line where he said, ‘Poor black men were locked up. Poor black women were locked out,’” Tubbs said. Locked into prison, locked out of homes from which they’d been evicted.

The lessons of the book hit close to home. He grew up in south Stockton, spending his elementary-school days in Louis Park Estates, a few blocks of nearly identical two-story condos just across the water from Rough and Ready Island. (Yes, that is its real name.)

“I’m not sure why they call it ‘estates.’ It’s a bunch of condominiums with stray cats walking around,” Tubbs jokes. “Growing up, when I'd throw out the trash, I’d toss it and dart because all the cats would come running. That’s why I still don’t like cats.”

On a recent afternoon, there were kids playing in the small and connected front yards, a few older folks perched on plastic chairs. An ancient gentleman in a brown zoot suit that might have been purchased in that cut’s heyday stepped creakily out of a Cadillac. It was closer to idyllic than dystopian, but every window had a set of heavy bars, even the second-story ones. And on one lawn, a family’s possessions were scattered everywhere, around a U-Haul that had been driven up onto the grass. If it was not an eviction, the scene spoke of some kind of hasty retreat.

* * *

Soon, there will be 1,000 more jobs in South Stockton. Amazon recently committed to building a 600,000-square-foot facility in the area.

That’s on top of a million-square-foot facility in a huge and growing logistics hub in Tracy, about 20 minutes down I-205, right at the base of the Altamont Pass, which separates the Central Valley from the East Bay.

Once a sleepy agricultural area, Tracy now finds itself a logistics hub for dozens of companies. The wealthy Bay Area is nearby. There is great highway access. The Port of Oakland is through the pass. The land is cheap. And most importantly, the companies want access to “a laborshed” that extends outside the Bay.

A single developer, the logistics-focused real-estate investment trust Prologis, is developing 1,800 acres next to existing facilities for Costco and Safeway. Its first big lease went to Amazon, which snapped up a million-square-foot building that was the first warehouse to be built in the Central Valley after the Great Recession. Now thousands of people work in the warehouse alongside a fleet of robots.

A newly constructed warehouse in Tracy (Alexis Madrigal)

“When I got in the business 10 years ago, people cared about how many truck stalls do you have, how many doors do you have, what’s your clear height,” Ryan George, the Prologis investment officer working on the Tracy project, told me. “That’s all still important, but what drives the discussion now is where is my labor? How do I compete to attract and retain labor?”

Several logistics-industry publications back up George’s assertion. There is a widely acknowledged labor “shortage” in logistics, which has been exacerbated by Amazon’s growth. That has driven wages above those of traditional brick-and-mortar retail jobs, but not high enough to retain employees in high-cost regions.

And that’s why Stockton and the surrounding small towns are so attractive. “Some companies are trading transportation advantages for locations that have a desirable labor pool,” wrote Logistics Management in August of this year.

At the same time, a report from the Material Handling Institute and Deloitte Consulting found that many companies expected a major increase in adoption of automation and robotics over the coming years in part because of how hard it is to find the cheap workers that make e-commerce go.

“The fact is that there are 600,000 [warehouse] jobs that are going unfilled in the United States and that gap is getting bigger and bigger,” Fetch Robotics CEO Melonee Wise told me late last year. “The turnover rate for any manufacturing or warehouse job is about 25 percent. And so, there is a need for automation because people aren’t showing up to do the work.”

And ever more e-commerce, which requires a ton of shipping, has added a new wrinkle to the structural problems: It’s highly seasonal. That’s where places like Tracy come into the equation. It’s close enough to serve the Bay Area’s wealthy, but can tap the labor pool not just in Stockton and Sacramento, but all the way out to the migrant workers of the Central Valley.

“The Central Valley in general has a big advantage. To put it in the simplest terms, there are folks out there picking tomatoes in the summertime,” George told me. “They don’t have anything to do in November, December, January. So that’s when they are helping when Amazon triples their employees. And it’s not unique to Amazon.”

Faced with these labor-market conditions, companies have a few options. They can pay out more in wages and offer more perks. They can add tech, in the form of robotics, trying to drive down the amount of labor they need. They can reduce the amount of training and responsibility the average worker needs, so all the people who churn through are roughly interchangeable.

The problem is that the latter two options usually make the jobs even worse, exacerbating the wage problem.

George takes me on a driving tour of the vast development. Out here in the back end of e-commerce, drought-tolerant plants line the boulevards, fed only by recycled water. There are bike paths and glassy office parks and little hints of the area’s previous life: an irrigation canal, a railroad crossing.

George stops so that we can watch the construction of a perfectly flat plane onto which concrete will be poured to create the foundation of another enormous building. We talk about how the town of Tracy has received the project. Though the city has been supportive, some residents don’t want the new development.

“People don’t realize this is where the future is. No one’s going to shopping malls. Shopping malls are going into here, right?” he says, pointing at the soon-to-be building.

A warehouse under construction (Alexis Madrigal)

Looking around, this does seem like the perfect place for a warehouse. We’re surrounded by highways, a wind farm, huge transmission lines, aqueducts. This is the shadow infrastructure of the Bay Area, the place where the physical systems that underlie even the most phone-dependent life take shape. There are jobs in making those systems work, but they may not be ones that people want to do.

It’s a fascinating paradox. While Mayor Tubbs worries about how to structure UBI and get decent jobs into his city, the logistics people are fretting about not having enough workers to fill the slots and about how to purchase more robots to reduce the need for human labor.

Even out here, two hours from Silicon Valley on a good day, the tech industry is shaking up civic and economic life. Would a truly universal UBI make hiring even more difficult, thereby driving even more automation? Given that not enough people seem to want warehouse jobs, is that necessarily a bad thing?

In San Francisco, the idea of a universal basic income can drive derisive snorts as a payoff from the tech overlords, but in Stockton, they’ll take all the help they can get.

Your Bones Live On Without You
October 18th, 2017, 10:30 AM

A skeleton is a human being in its most naked form. A life stripped down to its essence. As the foundation of our bodies—indeed, of our very being—skeletons provoke equal measures of fascination and terror.

As an archaeologist excavating burials, I’ve felt connected to another person—separated by centuries of time—by touching their remains. I’ve observed how exhibits of Egyptian mummies and plastinated bodies inspire wonder for others. But as a museum curator, I’ve also learned that for many cultures, human remains are not organic material to be exploited for science, but rather the sacred remnants of ancestors to be revered.

Our physical bodies will exist as motionless bones far longer than as animate flesh. And human skeletons evoke powerful reactions, from reverence to fear, when they’re encountered. Those features imbue skeletons with a surprising power: Through their earthly remains, people can live on.

* * *

Bones are an ancient obsession. Archaeologists recently revealed an 11,000-year-old “skull cult” in Turkey. Humanity’s first farmers also ritually de-fleshed, carved, and displayed human crania. For a thousand years, Japanese folklore has warned of the gashadokuro, a colossal starving skeleton who feasts on the living in the dark of night. Members of the Chimbu tribe of Papua New Guinea intimidate enemies by painting their entire bodies as frightening skeletons, becoming an army of the dead. In medieval Europe, the skeleton was commonly portrayed as a memento mori—a reminder of the inevitability of death.

From Hamlet’s gaze into the eye sockets of the departed court jester, to Paris’s underground catacombs (where there are 6 million skeletons for the public to view), to the laughing skulls carved on pumpkins for Halloween, human bones continue to haunt the collective imagination.

My own imagination was stirred when I excavated my first grave along Highway 188 in central Arizona more than 20 years ago. The road needed to be realigned, but more than 300 burials stood in its way, left some 750 years ago by a Native American group scholars call the Salado. As cars whizzed by, I dug into the soft dirt to reveal pearl-white bones. The archaeological work was slow and painstaking—not only because of the sheer number of burials, but also because of their dazzling contents. In the Southwest, ancient graves typically consist of the bones of the dead along with a few nonperishable artifacts, such as pottery or stone. Here, the graves were loaded with shell and turquoise jewelry, stone animal carvings, bone hairpins, and whole jars and stone points.

I had already uncovered two bodies in the shallow pit, and now a third skull appeared. When I finished exposing the left hand of this third individual, I gasped. Her hand was situated just below the right hand of the second. I realized that the pair likely died at the same time and were placed in the grave side-by-side, holding hands.

It was a moment that would shape my view of what human remains mean. Seeing those two ancients tenderly touching each other in death, I had an immediate link to their history, previously lost to the past. But I also felt their humanity surround me in the present.

* * *

Science tends to take a cold view of the dead. Bones, which The Anatomy and Biology of the Human Skeleton describes as the “remnants of mineralized connective tissue,” are made up of cells arranged in a matrix like a spiderweb. When living, they are a bank of salts, calcium, and red blood cells. Adult humans have 206 bones, which shelter vital organs while also working in concert with muscles to give humans their characteristic fluid rigidity. Though made from soft tissue, bones are tremendously strong. They can heal themselves. The human skeleton is a brilliant feat of evolution.

It took centuries for humans to understand it. Nearly 2,000 years ago, in what is now Turkey, the physician and philosopher Galen undertook one of the first systematic studies of human anatomy. He got it mostly right, but also seeded myths—such as the theory that bones consist of the same matter as semen because they share a similar color.

Later, the Persians took great interest in anatomy, advancing knowledge of it along with scholars in China, Japan, and India. But it wasn’t until the cusp of the European Renaissance that a renewed interest in human dissection led to detailed studies of the body’s architecture. The greatest researcher of this period was Leonardo da Vinci, whose meticulous illustrations accurately revealed the body’s inner workings. By the 16th century, articulated human skeletons hung in anatomy theaters across Europe.

During the centuries that followed, the science of the human skeleton took a darker turn. Between 1839 and 1849, Samuel G. Morton published his three-volume Crania Americana, which purported to prove the superiority or inferiority of races based on measurements of their skulls. Based on these racist ideas, museums collected thousands of skeletons—mostly of Native Americans, since their graveyards were easiest to pillage.

Today’s researchers reject such views, of course. Biological inheritance is intertwined with behavior, environment, and culture. People are born with bones, but those bones respond to the world that contains them and bodies that live atop their scaffolding. This is why the skeleton continues to be so valuable to archaeologists. Excavated remains tell the stories of the dead—a person’s sex and age at death, along with their disorders and diseases, traumas and infections, clues to their diet, what hand they used most, how hard they worked. Bones are also a vessel for DNA, which allows scientists to trace the migrations of ancient humans and even discover who they had sex with.

* * *

Some cultures intentionally display their dead. The Torajans on the Indonesian island of Sulawesi, for example, mummify deceased relatives and keep them in their homes, talking to them and feeding them. Yet many people around the world are distraught that their ancestors lie as specimens on museum shelves.

Some years ago, a group of Native Americans came to visit their ancestors’ remains in the storage area of the museum where I work. They asked me to turn off the lights. We were engulfed in darkness when an elder struck a match and lit a bundle of sage, the sweet smoke filling the air. He then sang a song so loudly that the metal of the cabinets reverberated like an accompanying drumbeat. He said he wanted to be sure that his ancestors’ spirits knew he was there—that he remembered them and cared for them.

Bones are not the same as shards of pottery or beaded moccasins. In 1990, after years of protest, Native Americans secured a federal law that established a process for the return of human remains, funerary offerings, and other cultural items from museums. In the years since, more than 57,000 Native American skeletons and 1.7 million burial goods have been repatriated (although more than 100,000 skeletons and millions more artifacts are still in U.S. museums). This movement has become global, as indigenous peoples in New Zealand, Australia, Canada, and parts of Africa have demanded the return of their dead.

While some scientists and museums have pushed back against such claims, the native peoples and the scientists agree more than they might realize. Most Native Americans and indigenous peoples do not oppose science; they object to the form of science that robs bodies of their humanity, especially without consent. Likewise, Westerners also respect skeletons when given the opportunity. In 2012, workers discovered a shallow, unmarked grave under a parking lot in Leicester, England. The skeleton it held, scholars soon confirmed, belonged to King Richard III. For more than 500 years, no one had known the exact fate of the English monarch, long portrayed as a tyrant and murderer. The discovery was a revelation. In his bones lay vital clues about the monarch’s life and last days.

Unlike the bones of so many Native Americans, however, the king’s didn’t go into a museum. His remains sparked an outpouring of grief and love. Locals raised more than $250,000 for a funeral. The body was laid in an oak coffin in Leicester’s Anglican cathedral. Thousands came to view him. After three days, in an intricate ceremony, 10 British Army soldiers carried Richard III to a marble tomb.

In this moment, Richard III was made a king once again, given a fleeting but vitalized second life. His skeleton provoked new ideas about his biography and England’s history. The mere presence of the bones got the living to fund and attend a burial with the pomp and circumstance befitting royalty.

* * *

A fork does not eat. A painting does not gaze. A book cannot think. But objects do induce humans to act and feel. A fork affords nourishment; a painting creates the experience of beauty; a book stimulates learning. Through their form, cultural function, historical role, or inherent qualities, objects exert their influence and power.

Perhaps nothing does this more profoundly than human bones. They are the medium through which people live on after death. The sight of skeletons can draw or repel. When used for historical purposes, they provide answers about life. When used spiritually, they provoke questions about what lies after death. Perhaps this is why people feel the power of skeletons so viscerally. They seem alive and dead all at once. That’s why they live on so vibrantly, and why people can’t help but react to them with both awe and fear. You and I and everyone else will surely die, but our bones will live on without us.


This article appears courtesy of Object Lessons.

Twitter's Harassment Problem Is Baked Into Its Design
October 18th, 2017, 10:30 AM

The first recorded example in Western literature of men telling women to shut up and stay in the house, writes classicist Mary Beard in her 2014 essay, “The Public Voice of Women,” is in the Odyssey. Not-yet-grown Telemachus tells his mother, Penelope, to “go back up into your quarters, and take up your own work, the loom and the distaff ... speech will be the business of men, all men, and of me most of all.”

As Beard noted in her essay, centuries on, the voices of women are still considered illegitimate in the public sphere, including the new spaces of social media. That manifests as verbal harassment, death threats, and doxing online; as complaints about the sound of women’s literal voices on the radio, giving talks, or in podcasts; as sexual harassment in the workplace; as catcalls on the street. All of these can be seen as ways to drive women out of the public sphere, and back to their proper domain of Kinder, Küche, Kirche (children, kitchen, church). On Friday, many Twitter users boycotted the platform in response to the suspension of the actress Rose McGowan’s account for speaking out about sexual harassment by the film executive Harvey Weinstein. The driving force for the boycott was women outraged that hate speech, including misogynist and racial harassment and threats, routinely goes unchecked, and yet McGowan’s account was suspended.

These women did indeed remove themselves from a public sphere. Twitter, with its more than 300 million active monthly users, is a communal space in a new and extraordinary way that’s driven by the specific technological decisions of the site, which carry with them specific affordances. “Affordances,” a term popularized in the world of design and user interaction by Donald Norman, is a way of describing the perceived possibilities of how the user can interact with the product. These affordances shape how users behave.

Much of the power of Twitter comes from retweets, which can carry the words of a user to an audience far beyond their own followers (for comparison, see Instagram, where no such function exists—it makes it much more difficult for a specific image to “go viral” on the site). But retweeting also allows for what social-media researchers such as danah boyd and Alice Marwick refer to as “context collapse”: removing tweets from not only their temporal and geographic context, but also their original social and cultural milieu, which is very different from most public spaces. I described it to a friend once on a New York City subway—“we’re talking in public, in that everyone near us in this subway car can hear what we’re saying, but that’s a very different ‘public’ than hearing ourselves on NPR tomorrow.” While readers may literally know nothing about the poster or the context except for what is said in that one tweet, they can still just hit “reply” and their response will likely be seen by the poster.

While nothing is stopping people from finding out more information before responding, the clearest affordance Twitter has is for these “drive-by” responses (I’ve been mansplained to by many people who I presume haven’t even looked at my bio to see the “engineering professor” there before trying to school me on my research field—per Telemachus, “of me most of all”). This amplification and context collapse, coupled with the ease of replying and of creating bots, makes targeted harassment trivially easy, particularly in an environment where users can both live mostly in their own ideological bubble by following people who share their views, however abhorrent, and easily forget that there is a real person behind the 140 characters of text.

So while Twitter may consider itself to be merely reflecting the discourse, these technological affordances ease the way for certain types of hostile behavior. If you think of the experience of the generalized, systemic misogyny and racism of our culture as being bathed in sunlight on a scorching hot day, Twitter might say it’s just a mirror. But it’s actually handing out magnifying glasses that can focus the already painful ambient sunlight into a killing ray. The targets of this ire, in our society and on Twitter, are disproportionately not just women but people of color. (Imagine how Telemachus would have responded if, rather than his mother, one of the non-Greek household slaves chose to speak up in visiting company.)

One of the most profound social changes of the last few decades is the opening up of public discourse to a broader range of speakers than ever before, and social media has been a large part of that. The specific affordances of Twitter make it powerful—it can amplify marginalized voices, but it can also amplify harassment. Friday’s boycott was intended to be a unified stand against that.

But the point of harassment is to shut women up, either by self-censorship through fear or by driving them away from Twitter, making it simply the newest wrinkle in that long history of exclusion from public spaces and conversations. Many women, especially women of color, therefore found a protest that mandated their silence to be ironic, if not outright misguided: It takes a certain amount of social power to genuinely believe that your absence would be remarked upon and lamented. After 3,000 years of denying the public sphere to all but a small set of voices, some of the new voices are rightly considering their presence to be a sit-in, an occupation, and they are rightly refusing to be driven away. Ultimately, if Twitter wants to be the public sphere, it needs to act like it, by working to create an environment where all voices can be safely heard. Twitter’s social problems are exacerbated by the affordances of its technology; the company will need to bring both ongoing human effort and better design decisions to improve the experiences of marginalized people, and therefore everyone, in its public sphere.

Google Maps' Failed Attempt to Get People to Lose Weight
October 17th, 2017, 10:30 AM

On Monday, the reporter Taylor Lorenz noticed that Google Maps had a new feature: Walking distances were delivered in terms of calories.

Instead of simply telling her that a walk would take 13 minutes, the app also converted that to an amount of energy, 59 calories. Then a click on that calorie count gave a further conversion, from calories to food.

Taylor Lorenz / Twitter

Specifically, mini cupcakes with pink frosting.

This was not well received.

Responses varied little: What was ostensibly a measure to promote health was widely interpreted as a tech corporation policing women’s bodies.

The writer Rachel Joy Larris noted: “‘Cupcake?’ Let’s talk about all the signifiers that contains about assumptions of gender, culture, and food.”

The writer Dana Cass said, referring to the Harvey Weinstein-induced Me Too movement: “Lol every woman I know has been sexually assaulted and Google Maps is telling me how many calories I’ll burn on my walk to work.”

The app offered no option to convert calorie counts into Budweiser or raw venison.

@natalierachel / Twitter

Within hours, BuzzFeed News reported that Google was simply testing the change, and that it “is removing this feature due to strong user feedback.”

Despite a boom in fitness apps and $1,200 watches that track physical activity, many people do not want to be reminded of calories unless they ask to be. While this sort of nudge may benefit some people, for others the concern is that an overwhelming focus on intake and output can drive bulimia or anorexia. In either case, unsolicited calorie counts and cupcake equivalents have an air of body policing and guilt inducement that does not pair well with a culture that assiduously regulates women’s appearances. As the writer Casey Johnston offered, “Any woman could have told you this is a supremely bad thing a) to do b) to not be able to turn off.”

In the spirit of no-one-size-fits-all solutions in health, there would be more logic in Google offering this as an opt-in feature rather than a default. Tailoring the experience to users in ways that are safe and driven by evidence would take more thought than simply forcing pink-cupcake counts on unsuspecting people.

For instance, Google estimated, “The average person burns 90 calories by walking one mile.” Calorie counts vary widely from person to person—walking a mile is a much less energy-intensive endeavor for a professional endurance athlete than a veteran of World War II. Google presumably has the personal data on most of us to make a much more precise calculation—and to suggest more specific incentives than cupcakes or burning calories.
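The arithmetic behind the feature is easy enough to reconstruct from Google’s stated average. Here is a minimal sketch, assuming a typical walking pace of about 3 miles per hour (the pace is my assumption; the 90-calories-per-mile figure is Google’s). At that pace, a 13-minute walk works out to roughly the 59 calories in Lorenz’s screenshot. The cupcake conversion is left out, since Google never said how many calories it assigned to a mini cupcake.

```python
# A minimal sketch of the conversion Google Maps appeared to make.
# The 90-calories-per-mile figure is Google's stated average; the
# 3 mph walking pace is an assumption used to turn minutes into miles.

CALORIES_PER_MILE = 90    # Google's quoted average
WALKING_SPEED_MPH = 3.0   # assumed typical walking pace

def estimated_calories(walk_minutes: float) -> float:
    """Convert a walking time into an estimated calorie burn."""
    miles = WALKING_SPEED_MPH * (walk_minutes / 60)
    return miles * CALORIES_PER_MILE

print(round(estimated_calories(13), 1))  # 58.5 -- close to the 59 in the screenshot
```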

I’ve argued many times that calorie bartering is not usually an effective approach to weight loss or health. Calories offer no insight into the nutritional value of a food, and they are often used by sellers of junk to convince people that they can eat junk if they simply exercise the calories away. But the metabolic effects of 100 calories of Coke on future hunger and energy storage are not the same as those of a 100-calorie salad, any more than introducing any two 100-pound people would have the same effect on a dinner party.

All of this is part of the consistent theme that obesity prevention is much less straightforward than other public-health challenges. Metabolic syndrome is unique among deadly preventable conditions; addressing it is not as simple as if Google Maps were able to track swarms of Zika-infected mosquitoes and suggest alternate routes.

As our behavior is shaped more and more by interactions with phones, our health is shaped by the world that comes to us through apps. The effects can be beneficial or otherwise, but they will not be neutral. This places a serious burden, and a serious opportunity, on designers to advocate responsibly and strategically for health. That means reckoning with the individual and societal stigma of states of health that affect our outward appearances, and those that are tied to ideas of guilt and moral judgment, and finding ways to make health easy without compromising any individual’s sense of agency in deciding what degree of health they choose to pursue.

What Facebook Did to American Democracy
October 15th, 2017, 10:30 AM

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.

Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.

Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

* * *

We’ve known since at least 2012 that Facebook was a powerful, non-neutral force in electoral politics. In that year, a combined University of California, San Diego and Facebook research team led by James Fowler published a study in Nature, which argued that Facebook’s “I Voted” button had driven a small but measurable increase in turnout, primarily among young people.

Rebecca Rosen’s 2012 story, “Did Facebook Give Democrats the Upper Hand?” relied on new research from Fowler, et al., about the presidential election that year. Again, the conclusion of their work was that Facebook’s get-out-the-vote message could have driven a substantial chunk of the increase in youth voter participation in the 2012 general election. Fowler told Rosen that it was “even possible that Facebook is completely responsible” for the youth voter increase. And because a higher proportion of young people vote Democratic than the general population, the net effect of Facebook’s GOTV effort would have been to help the Dems.

The research showed that a small design change by Facebook could have electoral repercussions, especially with America’s electoral-college format in which a few hotly contested states have a disproportionate impact on the national outcome. And the pro-liberal effect it implied became enshrined as an axiom of how campaign staffers, reporters, and academics viewed social media.

In June 2014, Harvard Law scholar Jonathan Zittrain wrote an essay in The New Republic called, “Facebook Could Decide an Election Without Anyone Ever Finding Out,” in which he called attention to the possibility of Facebook selectively depressing voter turnout. (He also suggested that Facebook be seen as an “information fiduciary,” charged with certain special roles and responsibilities because it controls so much personal data.)

In late 2014, The Daily Dot called attention to an obscure Facebook-produced case study on how strategists defeated a statewide measure in Florida by relentlessly focusing Facebook ads on Broward and Dade counties, Democratic strongholds. Working with a tiny budget that would have allowed them to send a single mailer to just 150,000 households, the digital-advertising firm Chong and Koster was able to obtain remarkable results. “Where the Facebook ads appeared, we did almost 20 percentage points better than where they didn’t,” testified a leader of the firm. “Within that area, the people who saw the ads were 17 percent more likely to vote our way than the people who didn’t. Within that group, the people who voted the way we wanted them to, when asked why, often cited the messages they learned from the Facebook ads.”

In April 2016, Robinson Meyer published “How Facebook Could Tilt the 2016 Election” after a company meeting in which some employees apparently put the stopping-Trump question to Mark Zuckerberg. Based on Fowler’s research, Meyer reimagined Zittrain’s hypothetical as a direct Facebook intervention to depress turnout among non-college graduates, who leaned Trump as a whole.

Facebook, of course, said it would never do such a thing. “Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community,” a spokesperson said. “We as a company are neutral—we have not and will not use our products in a way that attempts to influence how people vote.”

They wouldn’t do it intentionally, at least.

As all these examples show, though, the potential for Facebook to have an impact on an election was clear for at least half a decade before Donald Trump was elected. But rather than focusing specifically on the integrity of elections, most writers—myself included, some observers like Sasha Issenberg, Zeynep Tufekci, and Daniel Kreiss excepted—bundled electoral problems inside other, broader concerns like privacy, surveillance, tech ideology, media-industry competition, or the psychological effects of social media.

The same was true even of people inside Facebook. “If you’d come to me in 2012, when the last presidential election was raging and we were cooking up ever more complicated ways to monetize Facebook data, and told me that Russian agents in the Kremlin’s employ would be buying Facebook ads to subvert American democracy, I’d have asked where your tin-foil hat was,” wrote Antonio García Martínez, who managed ad targeting for Facebook back then. “And yet, now we live in that otherworldly political reality.”

Not to excuse us, but this was back on the Old Earth, too, when electoral politics was not the thing that every single person talked about all the time. There were other important dynamics to Facebook’s growing power that needed to be covered.

* * *

Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
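To make the mechanics concrete, here is a deliberately toy sketch of engagement-weighted ranking. Nothing in it comes from Facebook itself except the ordering described above, that predicted shares count for more than comments, which count for more than likes; the weights, probabilities, and post fields are invented for illustration, and the real system feeds thousands of features into a machine-learned model rather than three hand-set numbers.

```python
# Toy illustration of engagement-weighted feed ranking -- not Facebook's
# actual algorithm. Only the share > comment > like ordering comes from
# the text; the weights and predicted probabilities are invented.

from dataclasses import dataclass

WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}  # hypothetical weights

@dataclass
class Post:
    author: str
    text: str
    p_like: float     # predicted probability the viewer likes the post
    p_comment: float  # predicted probability the viewer comments on it
    p_share: float    # predicted probability the viewer shares it

def engagement_score(post: Post) -> float:
    """Combine predicted engagement probabilities into a single score."""
    return (WEIGHTS["like"] * post.p_like
            + WEIGHTS["comment"] * post.p_comment
            + WEIGHTS["share"] * post.p_share)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("news_page", "Hyper-partisan headline", 0.30, 0.10, 0.20),
    Post("friend", "Vacation photo", 0.50, 0.05, 0.01),
])
for post in feed:
    print(f"{engagement_score(post):.2f}  {post.author}: {post.text}")
```

In this toy version, the provocative headline outranks the friend’s photo because it is more likely to be shared, which is the dynamic the rest of this story turns on.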

What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” There are other factors, as Slate’s Will Oremus noted in this rare story about the News Feed ranking team. But who knows how much weight they actually receive, or for how long, as the system evolves. For example, one change that Facebook highlighted to Oremus in early 2016—taking into account how long people look at a story, even if they don’t click it—was subsequently dismissed by Lars Backstrom, the VP of engineering in charge of News Feed ranking, in a May 2017 technical talk as a “noisy” signal that’s also “biased in a few ways,” making it “hard to use.”

Facebook’s engineers do not want to introduce noise into the system. Because the News Feed, this machine for generating engagement, is Facebook’s most important technical system. Their success predicting what you’ll like is why users spend an average of more than 50 minutes a day on the site, and why even the former creator of the “like” button worries about how well the site captures attention. News Feed works really well.

But as far as “personalized newspapers” go, this one’s editorial sensibilities are limited. Most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. And this is true not just in politics, but the broader culture.

That this could be a problem was apparent to many. Eli Pariser’s The Filter Bubble, which came out in the summer of 2011, became the most widely cited distillation of the effects Facebook and other internet platforms could have on public discourse.

Pariser began researching the book when he noticed that conservative people, whom he’d befriended on the platform despite his left-leaning politics, had disappeared from his News Feed. “I was still clicking my progressive friends’ links more than my conservative friends’—and links to the latest Lady Gaga videos more than either,” he wrote. “So no conservative links for me.”

Through the book, he traces the many potential problems that the “personalization” of media might bring. Most germane to this discussion, he raised the point that if every one of the billion News Feeds is different, how can anyone understand what other people are seeing and responding to?

“The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom,” Pariser wrote. “How does a [political] campaign know what its opponent is saying if ads are only targeted to white Jewish men between 28 and 34 who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?”

This did, indeed, become an enormous problem. When I was editor in chief of Fusion, we set about trying to track the “digital campaign” with several dedicated people. What we quickly realized was that there was both too much data—the noisiness of all the different posts by the various candidates and their associates—and too little. Targeting made the actual messaging that the campaigns were paying for impossible to track. On Facebook, the campaigns could show ads only to the people they targeted, so we couldn’t see the messages that were actually reaching people in battleground areas. From the outside, it was a technical impossibility to know what ads were running on Facebook, one that the company had fought to keep intact.

Pariser suggests in his book, “one simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted.” Which could happen in future campaigns.

Imagine if this had happened in 2016. If there were data sets of all the ads that the campaigns and others had run, we’d know a lot more about what actually happened last year. The Filter Bubble is obviously prescient work, but there was one thing that Pariser and most other people did not foresee. And that’s that Facebook became completely dominant as a media distributor.

* * *

About two years after Pariser published his book, Facebook took over the news-media ecosystem. They’ve never publicly admitted it, but in late 2013, they began to serve ads inviting users to “like” media pages. This caused a massive increase in the amount of traffic that Facebook sent to media companies. At The Atlantic and other publishers across the media landscape, it was like a tide was carrying us to new traffic records. Without hiring anyone else, without changing strategy or tactics, without publishing more, suddenly everything was easier.

Traffic to The Atlantic from Facebook.com increased, but at the time, most of the new traffic did not look like it was coming from Facebook in The Atlantic’s analytics. It showed up as “direct/bookmarked” or some variation, depending on the software. It looked like what I called “dark social” back in 2012. But as BuzzFeed’s Charlie Warzel pointed out at the time, and as I came to believe, it was primarily Facebook traffic in disguise. Between August and October of 2013, BuzzFeed’s “partner network” of hundreds of websites saw a jump in traffic from Facebook of 69 percent.

At The Atlantic, we ran a series of experiments that showed, pretty definitively from our perspective, that most of the stuff that looked like “dark social” was, in fact, traffic coming from within Facebook’s mobile app. Across the landscape, it began to dawn on people who thought about these kinds of things: Damn, Facebook owns us. They had taken over media distribution.

Why? This is a best guess, proffered by Robinson Meyer as it was happening: Facebook wanted to crush Twitter, which had drawn a disproportionate share of media and media-figure attention. Just as Instagram borrowed Snapchat’s “Stories” to help crush the site’s growth, Facebook decided it needed to own “news” to take the wind out of the newly IPO’d Twitter.

The first sign that this new system had some kinks came with “Upworthy-style” headlines. (And you’ll never guess what happened next!) Things didn’t just go kind of viral, they went ViralNova, a site which, like Upworthy itself, Facebook eventually smacked down. Many of the new sites had, like Upworthy, which was cofounded by Pariser, a progressive bent.

Less noticed was that a right-wing media was developing in opposition to and alongside these left-leaning sites. “By 2014, the outlines of the Facebook-native hard-right voice and grievance spectrum were there,” The New York Times’ media and tech writer John Herrman told me, “and I tricked myself into thinking they were a reaction/counterpart to the wave of soft progressive/inspirational content that had just crested. It ended up a Reaction in a much bigger and destabilizing sense.”

The other sign of algorithmic trouble was the wild swings that Facebook Video underwent. In the early days, just about any old video was likely to generate many, many, many views. The numbers were insane. Just as an example, a Fortune article noted that BuzzFeed’s video views “grew 80-fold in a year, reaching more than 500 million in April.” Suddenly, all kinds of video—good, bad, and ugly—were doing 1-2-3 million views.

As with news, Facebook’s video push was a direct assault on a competitor, YouTube. Videos changed the dynamics of the News Feed for individuals, for media companies, and for anyone trying to understand what the hell was going on.

Individuals were suddenly inundated with video. Media companies, despite having no business model for it, were forced to crank out video somehow or risk their pages and brands losing relevance as video posts crowded others out.

And on top of all that, scholars and industry observers were used to looking at what was happening in articles to understand how information was flowing. Now, by far the most viewed media objects on Facebook, and therefore on the internet, were videos without transcripts or centralized repositories. In the early days, many successful videos were just “freebooted” (i.e., stolen) videos from other places or reposts. All of which served to confuse and obfuscate the transport mechanisms for information and ideas on Facebook.

Through this messy, chaotic, dynamic situation, a new media rose up through the Facebook burst to occupy the big filter bubbles. On the right, Breitbart is the center of a new conservative network. A study of 1.25 million election news articles found “a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.”

Breitbart, of course, also lent Steve Bannon, its chief, to the Trump campaign, creating another feedback loop between the candidate and a rabid partisan press. Through 2015, Breitbart grew from a medium-sized site with a small Facebook page of 100,000 likes into a powerful force shaping the election, with almost 1.5 million likes. In the key metric for Facebook’s News Feed, its posts got 886,000 interactions from Facebook users in January. By July, Breitbart had surpassed The New York Times’ main account in interactions. By December, it was doing 10 million interactions per month, about 50 percent of Fox News, which had 11.5 million likes on its main page. Breitbart’s audience was hyper-engaged.

There is no precise equivalent to the Breitbart phenomenon on the left. Rather, the big news organizations are classified as center-left, basically, with fringier left-wing sites showing far smaller followings than Breitbart has on the right.

And this new, hyperpartisan media created the perfect conditions for another dynamic that influenced the 2016 election, the rise of fake news.

Sites by partisan attention (Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman)

* * *

In a December 2015 article for BuzzFeed, Joseph Bernstein argued that “the dark forces of the internet became a counterculture.” He called it “Chanterculture” after the trolls who gathered at the meme-creating, often-racist 4chan message board. Others ended up calling it the “alt-right.” This culture combined a bunch of people who loved to perpetuate hoaxes with angry Gamergaters with “free-speech” advocates like Milo Yiannopoulos with honest-to-God neo-Nazis and white supremacists. And these people loved Donald Trump.

“This year Chanterculture found its true hero, who makes it plain that what we’re seeing is a genuine movement: the current master of American resentment, Donald Trump,” Bernstein wrote. “Everywhere you look on ‘politically incorrect’ subforums and random chans, he looms.”

When you combine hyper-partisan media with a group of people who love to clown “normies,” you end up with things like Pizzagate, a patently ridiculous and widely debunked conspiracy theory that held there was a pedophilia ring somehow linked to Hillary Clinton. It was just the most bizarre thing in the entire world. And many of the figures in Bernstein’s story were all over it, including several with whom the current president has consorted on social media.

But Pizzagate was only the most Pynchonian of all the crazy misinformation and hoaxes that spread in the run-up to the election.

BuzzFeed, deeply attuned to the flows of the social web, was all over the story through reporter Craig Silverman. His best-known analysis happened after the election, when he showed that “in the final three months of the U.S. presidential campaign, the top-performing fake election-news stories on Facebook generated more engagement than the top stories from major news outlets such as The New York Times, The Washington Post, The Huffington Post, NBC News, and others.”

But he also tracked fake news before the election, as did other outlets such as The Washington Post, including showing that Facebook’s “Trending” algorithm regularly promoted fake news. By September of 2016, even the Pope himself was talking about fake news, by which we mean actual hoaxes or lies perpetuated by a variety of actors.

The longevity of Snopes shows that hoaxes are nothing new to the internet. Already in January 2015, Robinson Meyer reported on how Facebook was “cracking down on the fake news stories that plague News Feeds everywhere.”

What made the election cycle different was that all of these changes to the information ecosystem had made it possible to develop weird businesses around fake news. Some random website posting aggregated news about the election could not drive a lot of traffic. But some random website announcing that the Pope had endorsed Donald Trump definitely could. The fake news generated a ton of engagement, which meant that it spread far and wide.

A few days before the election Silverman and fellow BuzzFeed contributor Lawrence Alexander traced 100 pro–Donald Trump sites to a town of 45,000 in Macedonia. Some teens there realized they could make money off the election, and just like that, became a node in the information network that helped Trump beat Clinton.

Whatever weird thing you imagine might happen, something weirder probably did happen. Reporters tried to keep up, but it was too strange. As Max Read put it in New York Magazine, Facebook is “like a four-dimensional object, we catch slices of it when it passes through the three-dimensional world we recognize.” No one can quite wrap their heads around what this thing has become, or all the things this thing has become.

“Not even President-Pope-Viceroy Zuckerberg himself seemed prepared for the role Facebook has played in global politics this past year,” Read wrote.

And we haven’t even gotten to the Russians.

* * *

Russia’s disinformation campaigns are well known. During his reporting for a story in The New York Times Magazine, Adrian Chen sat across the street from the headquarters of the Internet Research Agency, watching workaday Russian agents/internet trolls head inside. He heard how the place had “industrialized the art of trolling” from a former employee. “Management was obsessed with statistics—page views, number of posts, a blog’s place on LiveJournal’s traffic charts—and team leaders compelled hard work through a system of bonuses and fines,” he wrote. Of course they wanted to maximize engagement, too!

There were reports that Russian trolls were commenting on American news sites. There were many, many reports of Russia’s propaganda offensive in Ukraine. Ukrainian journalists run a website, StopFake, dedicated to cataloging these disinformation attempts. It has hundreds of posts reaching back into 2014.

A Guardian reporter who looked into Russian military doctrine around information war found a handbook that described how it might work. “The deployment of information weapons, [the book] suggests, ‘acts like an invisible radiation’ upon its targets: ‘The population doesn’t even feel it is being acted upon. So the state doesn’t switch on its self-defense mechanisms,’” wrote Peter Pomerantsev.

As more details about the Russian disinformation campaign come to the surface through Facebook’s continued digging, it’s fair to say that it’s not just the state that did not switch on its self-defense mechanisms. The influence campaign just happened on Facebook without anyone noticing.

As many people have noted, the 3,000 ads that have been linked to Russia are a drop in the bucket, even if they did reach millions of people. The real game is simply that Russian operatives created pages that reached people “organically,” as the saying goes. Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, pulled data on the six publicly known Russia-linked Facebook pages. He found that their posts had been shared 340 million times. And those were six of 470 pages that Facebook has linked to Russian operatives. You’re probably talking billions of shares, with who knows how many views, and with what kind of specific targeting.
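The “billions” is a back-of-envelope extrapolation, and it is worth being explicit about the assumption baked into it: that the six publicly known pages were roughly typical of all 470, which nobody outside Facebook can verify. Under that assumption, the arithmetic looks like this:

```python
# Back-of-envelope extrapolation from Albright's figures. It assumes the
# six known pages were typical of all 470 Russia-linked pages -- an
# unverifiable assumption -- so treat the result as an order-of-magnitude
# estimate, not a measurement.

known_shares = 340_000_000  # shares across the 6 publicly known pages
known_pages = 6
total_pages = 470

shares_per_page = known_shares / known_pages      # roughly 57 million
estimated_total = shares_per_page * total_pages   # roughly 26.6 billion
print(f"~{estimated_total / 1e9:.0f} billion shares")
```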

The Russians are good at engagement! Yet, before the U.S. election, even after Hillary Clinton and intelligence agencies fingered Russian intelligence for meddling in the election, even after news reports suggested that a disinformation campaign was afoot, nothing about the actual operations on Facebook came out.

In the aftermath of these discoveries, three Facebook security researchers, Jen Weedon, William Nuland, and Alex Stamos, released a white paper called Information Operations and Facebook. “We have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam, and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” they wrote.

One key theme of the paper is that they were used to dealing with economic actors, who responded to costs and incentives. When it comes to Russian operatives paid to run influence operations on Facebook, those constraints no longer hold. “The area of information operations does provide a unique challenge,” they wrote, “in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.” They were not expecting that.

Add everything up. The chaos of a billion-person platform that competitively dominated media distribution. The known electoral efficacy of Facebook. The wild fake news and misinformation rampaging across the internet generally and Facebook specifically. The Russian info operations. All of these things were known.

And yet no one could quite put it all together: The dominant social network had altered the information and persuasion environment of the election beyond recognition while taking a very big chunk of the estimated $1.4 billion worth of digital advertising purchased during the election. There were hundreds of millions of dollars of dark ads doing their work. Fake news all over the place. Macedonian teens campaigning for Trump. Ragingly partisan media infospheres serving up only the news you wanted to hear. Who could believe anything? What room was there for policy positions when all this stuff was eating up News Feed space? Who the hell knew what was going on?

As late as August 20, 2016, The Washington Post could say this of the campaigns:

Hillary Clinton is running arguably the most digital presidential campaign in U.S. history. Donald Trump is running one of the most analog campaigns in recent memory. The Clinton team is bent on finding more effective ways to identify supporters and ensure they cast ballots; Trump is, famously and unapologetically, sticking to a 1980s-era focus on courting attention and voters via television.

Just a week earlier, Trump’s campaign had hired Cambridge Analytica. Soon, they’d ramped up to $70 million a month in Facebook advertising spending. And the next thing you knew, Brad Parscale, Trump’s digital director, was doing the postmortem rounds, talking up his win.

“These social platforms are all invented by very liberal people on the west and east coasts,” Parscale said. “And we figure out how to use it to push conservative values. I don’t think they thought that would ever happen.”

And that was part of the media’s problem, too.

* * *

Before Trump’s election, the impact of internet technology generally and Facebook specifically was seen as favoring Democrats. Even a TechCrunch critique of Rosen’s 2012 article about Facebook’s electoral power argued, “the internet inherently advantages liberals because, on average, their greater psychological embrace of disruption leads to more innovation (after all, nearly every major digital breakthrough, from online fundraising to the use of big data, was pioneered by Democrats).”

Certainly, the Obama tech team that I profiled in 2012 thought this was the case. Of course, social media would benefit the (youthful, diverse, internet-savvy) left. And the political bent of just about all Silicon Valley companies runs Democratic. For all the talk about Facebook employees embedding with the Trump campaign, the former CEO of Google, Eric Schmidt, sat with the Obama tech team on Election Day 2012.

In June 2015, The New York Times ran an article about Republicans trying to ramp up their digital campaigns that began like this: “The criticism after the 2012 presidential election was swift and harsh: Democrats were light-years ahead of Republicans when it came to digital strategy and tactics, and Republicans had serious work to do on the technology front if they ever hoped to win back the White House.”

It cited Sasha Issenberg, the most astute reporter on political technology. “The Republicans have a particular challenge,” Issenberg said, “which is, in these areas they don’t have many people with either the hard skills or the experience to go out and take on this type of work.”

University of North Carolina journalism professor Daniel Kreiss wrote a whole (good) book, Prototype Politics, showing that Democrats had an incredible personnel advantage. “Drawing on an innovative data set of the professional careers of 629 staffers working in technology on presidential campaigns from 2004 to 2012 and data from interviews with more than 60 party and campaign staffers,” Kreiss wrote, “the book details how and explains why the Democrats have invested more in technology, attracted staffers with specialized expertise to work in electoral politics, and founded an array of firms and organizations to diffuse technological innovations down ballot and across election cycles.”

Which is to say: It’s not that no journalists, internet-focused lawyers, or technologists saw Facebook’s looming electoral presence—it was undeniable—but all the evidence pointed to the structural change benefitting Democrats. And let’s just state the obvious: Most reporters and professors are probably about as liberal as your standard Silicon Valley technologist, so this conclusion fit into the comfort zone of those in the field.

By late October, the role that Facebook might be playing in the Trump campaign—and more broadly—was emerging. Joshua Green and Issenberg reported a long feature on the data operation then in motion. The Trump campaign was working to suppress “idealistic white liberals, young women, and African Americans,” and they’d be doing it with targeted, “dark” Facebook ads. These ads are only visible to the buyer, the ad recipients, and Facebook. No one who hasn’t been targeted by them can see them. How was anyone supposed to know what was going on, when the key campaign terrain was literally invisible to outside observers?

Steve Bannon was confident in the operation. “I wouldn’t have come aboard, even for Trump, if I hadn’t known they were building this massive Facebook and data engine,” Bannon told them. “Facebook is what propelled Breitbart to a massive audience. We know its power.”

Issenberg and Green called it “an odd gambit” which had “no scientific basis.” Then again, Trump’s whole campaign had seemed like an odd gambit with no scientific basis. The conventional wisdom was that Trump was going to lose and lose badly. In the days before the election, The Huffington Post’s data team had Clinton’s election probability at 98.3 percent. A member of the team, Ryan Grim, went after Nate Silver for his more conservative probability of 64.7 percent, accusing him of skewing his data for “punditry” reasons. Grim ended his post on the topic, “If you want to put your faith in the numbers, you can relax. She’s got this.”

Narrator: She did not have this.

But the point isn’t that a Republican beat a Democrat. The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.

In the middle of the summer of the election, the former Facebook ad-targeting product manager, Antonio García Martínez, released an autobiography called Chaos Monkeys. He called his colleagues “chaos monkeys,” messing with industry after industry in their company-creating fervor. “The question for society,” he wrote, “is whether it can survive these entrepreneurial chaos monkeys intact, and at what human cost.” This is the real epitaph of the election.

The information systems that people use to process news have been rerouted through Facebook, and in the process, mostly broken and hidden from view. It wasn’t just liberal bias that kept the media from putting everything together. Much of the hundreds of millions of dollars that was spent during the election cycle came in the form of “dark ads.”

The truth is that while many reporters knew some things that were going on on Facebook, no one knew everything that was going on on Facebook, not even Facebook. And so, during the most significant shift in the technology of politics since the television, the first draft of history is filled with undecipherable whorls and empty pages. Meanwhile, the 2018 midterms loom.

Update: After publication, Adam Mosseri, head of News Feed, sent an email describing some of the work that Facebook is doing in response to the problems during the election. They include new software and processes “to stop the spread of misinformation, click-bait and other problematic content on Facebook.”

“The truth is we’ve learned things since the election, and we take our responsibility to protect the community of people who use Facebook seriously. As a result, we’ve launched a company-wide effort to improve the integrity of information on our service,” he wrote. “It’s already translated into new products, new protections, and the commitment of thousands of new people to enforce our policies and standards... We know there is a lot more work to do, but I’ve never seen this company more engaged on a single challenge since I joined almost 10 years ago.”

The Underclass Origins of the Little Black Dress
October 13th, 2017, 10:30 AM

Last week, Sotheby’s auctioned off 140 little black dresses. The event, “Les Petites Robes Noires, 1921–2010,” featured vintage dresses collected by the fashion antiquarian Didier Ludot. A dazzling mix of silk faille, velvet, jersey, and tulle—all in black—cut simple silhouettes. The collection included iconic pieces from Chanel, Givenchy, and Hermès. The more expensive lots fetched over 20,000 euros.

To introduce the collection, Ludot wrote, “Today I pay tribute to the astonishing story of the little black dress and to the designers who wrote its story, a dizzying tale ... from the Roaring Twenties to the new millennium.” But the most astonishing part of the little black dress’s story might be its prologue, the backstory left out of the auction catalogue, the glossy coffee-table books, and the fashion magazines. The most important acolytes of the little black dress were neither designers nor aristocrats, but masses of working-class women.

* * *

In October 1926, Vogue featured a sketch of a long-sleeved, calf-length, black sheath dress by a plucky young designer named Coco Chanel. Dubbed “Chanel’s Ford,” the dress was promoted as equivalent in egalitarianism to the Model T.

At the time, Vogue’s editors wrote that Chanel’s little black dress would “become sort of a uniform for all women of taste.” That seems like an astute prediction, in hindsight. But in 1926, the proclamation was tone-deaf at best, as the little black dress was already the actual uniform of many working-class women. The little black dress (or LBD, as it is commonly abbreviated) was a uniform designed to keep certain women in their place. Only later was it co-opted as haute couture for women of taste.

When the lower classes adopt the fashions of the elite, the elites often respond by changing course abruptly—a neckline or a hemline rises or falls dramatically, perhaps, or a voluminous silhouette narrows. But sometimes, rather than quickly changing styles, the upper classes simply wear the clothes the poor have discarded.

For example, as towns grew in the 14th century, a merchant class arose within them. This middle class had some discretionary income, and they spent it on the most conspicuous consumer good: clothing. Finally, they could afford jewel-studded velvets, gold and silver trimmings, brightly colored coats, and sumptuous furs. As the fashion historian Anne Hollander has explained, when the aristocracy couldn’t outlaw or outspend these medieval nouveau riche, they started wearing baggy and threadbare clothing. This new fashion—looking like one had thrown on any old thing—served as a not-so-subtle reminder to the upstarts that, while money could buy clothes, it couldn’t buy class.

Blue jeans offer a more recent example. Jeans began as cheap and durable work pants for miners and farmers. They were the de facto uniform of the rural working class. But once working-class men had access to ready-to-wear trousers, their jeans started showing up on postwar suburban youths, and then in trendy boutiques. Recently, Nordstrom even sold a $425 pair of jeans with fake mud stains—the ultimate blue-collar costume. Once more, the wealthy turn the tables by appropriating the clothing of the poor.

The LBD also finds its origins among the poor. Before the 19th century, domestic servants wore whatever they could—homemade dresses, often, but also their employers’ hand-me-downs. But in the 1860s, the British upper classes required their maids to wear a common uniform: a white mobcap, an apron, and a simple black dress. Soon after, wealthy American and French families followed suit.

Relationships between upper-class women and their servants had changed, becoming “less intimate and more authoritarian,” as the sociologist Diana Crane puts it. At this time, servants ceased to be “the help,” a somewhat collegial characterization, and became known as “domestics.” And domestics wearing upper-class castoffs, especially young and pretty ones, led to embarrassing mix-ups. A caller mistaking the maid for the mistress of the house raised uncomfortable questions about recently erected class barriers.

Cassell’s Household Guide, which billed itself as an encyclopedia of domestic and social economy, summed up the problem like this, circa 1880: “As a general rule, ladies do not like to see their maids dressed in the clothes they themselves have worn—the difference in the social scale of mistress and maid renders this unpleasing.”

But Cassell’s made one exception: “a black or a dark-colored silk.” Previously, a simple black dress meant a wealthy woman was “dressing down.” But by the 19th century, the black dress had become a staple of the lower and middle classes. It was the perfect hand-me-down for the help.

* * *

There was a time when black signified wealth. It was favored by 15th-century Spanish aristocrats and wealthy Dutch merchants. Later, Baldassare Castiglione’s 1528 The Book of the Courtier advised others to follow their lead, to appear above the petty fads of commoners. Black clothing conveyed plainness and piety, for one thing. But it was also incredibly expensive to produce, requiring vast quantities of imported “oak apples”—a bulbous growth left behind on oak leaves from insect egg sacs. By the early 19th century, a newer dye made from logwood and ferrous sulfate made the color cheap to produce. In 1863, an even cheaper synthetic aniline black dye was developed.

By the 1880s, most awkward maid-or-mistress mix-ups had been eliminated thanks to the trusty black dress. But another sort of working-class woman now had the opportunity to dress above her station. Rapid industrialization gave consumers more disposable income, and they wanted places to spend it. More shops opened in urban centers, and cheap labor was needed to staff them. Unmarried young women began pouring into the cities to work as “shopgirls” in dry-good establishments, dress stores, hat and glove shops, and department stores.

The shopgirl enjoyed more freedom and less supervision than domestic servants did. Often, for the first time in her life, she also enjoyed some disposable income of her own. The sewing machine, invented in 1846 and mass-produced in the 1870s, made it easier than ever to imitate upper-class fashions. Paired with the precut paper pattern devised by the upscale American designer Ellen Curtis Demorest, it let women duplicate the latest fashions from Paris with relative ease. And advances in efficiency at textile factories made a wider variety of fabrics and trims available with which to do so.

The new cheap aniline dyes that made the domestic’s black uniform possible also made brightly colored dresses—the vivid scarlets, blues, and greens that were once only for the upper classes—affordable, too. With a few dollars and a few nights’ work, an enterprising shopgirl could create a passable imitation of a dress from the society pages. Or instead, she could shop the sale rack at her place of employment—one of the large, new department stores—and purchase a ready-to-wear dress. She could then alter and trim the dress with lace, sequins, or buttons to make it appear custom-made.

So attired, she might successfully blend in with a store’s clientele—or even outshine them. This wasn’t a desirable state of affairs. Writing in the June 4, 1910, edition of the International Gazette, a Methodist minister warned that “the craze of the shopgirl ... as fashionably attired as the rich woman she waits on had become a menace.” Even earlier, in response to customer complaints, employers had brainstormed ways to neutralize the threat. In 1890, The Sun declared there was a “revolution in dress” underway, “not by the fashionable folk, but by New York’s army of shopgirls.”

In response, many employers began requiring their female employees to dress like domestic servants, in simple black dresses. An 1892 San Francisco Call headline summarized the reaction among the labor pool: “The Shopgirls Hate It.” Sometimes they even went on strike in response. But threatened with termination, most shopgirls buckled, and by the 1890s the little black dress was the required uniform in New York, London, and Paris.

In the summer of 1894, wearing a black dress became a condition of employment for Jersey City telephone operators, too. The “‘hello’ girls,” as they were called, also protested. Newspapers presented such cases sympathetically; in 1892, for example, the Reading Times pointed out that the women were opposed not to the dress itself, but “to the idea of showing by their dress that they are working girls.”

For these reasons, the little black dress became a marker of class. When young working-class women complained that being forced into uniform was “inconsistent with our ideals of freedom and independence,” as The San Francisco Call reported in 1892, they weren’t just complaining about self-expression. Embedded in their ideals was the promise of social mobility.

These women were the fin de siècle equivalent of medieval merchants. They mixed with the upper classes, whether in drawing rooms or on retail shop floors, and they saw what the wealthy wore up close. Thanks to the sewing machine, the paper pattern, and affordable fabrics, the working classes could finally, feasibly, dress like high society—even if they were now only permitted to do so after work hours.

* * *

Society matrons exacted their revenge by dressing like shopgirls and maids, reappropriating their little black dresses for the upper crust.

Lillie Langtry, a famous British beauty who would go on to become a successful actress, conquered London society in 1886 “dressed in a simple little black frock,” as the Emporia Daily News described it. By the early 1900s, socialites who wanted to appear especially youthful and edgy donned little black dresses. The LBD appeared in fashion magazines and society pages decades before Chanel’s dress appeared in Vogue. It was such an established trend by 1915 that even the wife of the U.S. Secretary of the Treasury appeared in public looking “like a college girl, in her short little black dress.”

While Coco Chanel didn’t invent the little black dress, she was astute enough to pick up on the underlying trend that made it popular—la pauvreté de luxe, she called it, or “luxurious poverty.” It was a look reserved exclusively for those who could “afford” to look poor by pretending that they simply couldn’t be bothered with fashion. But while a rich woman might now better blend into the crowd, on closer inspection, there would be some small detail in her seemingly anonymous garment—a certain cut or fabric or label—that acted as a secret handshake for those in the know.

Today, the fashion industry sometimes celebrates the little black dress as an equal-opportunity fashion—versatile, classic, and chic. But this neutral garment was never ideologically neutral—nor was it the democratic creation of a visionary designer. The little black dress marked and mediated social boundaries, a collaboration between cutting-edge technology and age-old class politics.

Today, in addition to little black-dress auctions, there are LBD-themed dinner parties and wine tastings, galas and charity balls. A little black dress has become a shorthand for instant glamour, promising to disguise both figure flaws and mundane lives. This blue-collar costume has successfully crossed over. Women wear little black dresses to feel more like Audrey Hepburn or Princess Diana or even a model in a Robert Palmer music video. But when they do, those women also conjure other predecessors: the women who wore them while they balanced trays, stocked shelves, folded shirts, worked the switchboards, and wrung out the laundry.


This article appears courtesy of Object Lessons.

Octopus-Inspired Material Can Change Its Texture
October 13th, 2017, 10:30 AM

There’s a famous viral video in which a diver slowly swims up to a clump of rock and seaweed, only for part of that clump to turn white, open its eye, and jet away, squirting ink behind it. Few videos so dramatically illustrate an octopus’s mastery of camouflage. But ignore, if you can, the creature’s color, and focus on its texture. As its skin shifts from mottled brown to spectral white, it also goes from lumpy to smooth. In literally the blink of an eye, all those little bumps, spikes, and protuberances disappear.

There are three components to an octopus’s camouflage—color, posture, and texture—and that third aspect is perhaps the least studied. But by drawing inspiration from octopuses’ textural tricks, a team of researchers led by Robert Shepherd, from Cornell University, has created a material that can change its shape in a similar way. From a starting position as a flat sheet, it can quickly mimic a field of stones, or the rosette of a succulent plant.

The project was entirely funded by the U.S. Army Research Office—and it’s not hard to imagine why. There are obvious benefits to having materials that can adaptively camouflage vehicles and robots by breaking up their outlines. But there are other applications beyond military ones, Shepherd says. It might cut down on shipping costs if you could deliver materials as flat sheets, and then readily transform them into three-dimensional shapes—like flat-pack furniture, but without the frustrating assembly. Or, as the roboticist Cecilia Laschi notes in a related commentary, biologists could use camouflaged robots to better spy on animals in their natural habitats.

“I don’t see this being implemented in any real application for quite some time,” says Shepherd. Instead, he mainly wants to learn more about how octopuses themselves work, by attempting to duplicate their biological feats with synthetic materials. “I’m just a big nerd who likes biology,” he says.

Octopuses change their texture using small regions in their skin known as papillae. In these structures, muscle fibers run in a spiderweb pattern, with both radial spokes and concentric circles. When these fibers contract, they draw the soft tissue in the papillae towards the center. And since that tissue doesn’t compress very well, the only direction it can go is up. By arranging the muscle fibers in different patterns, the octopus can turn flat, two-dimensional skin into all manner of three-dimensional shapes, including round bumps, sharp spikes, and even branching structures.

Shepherd’s team—which includes the postdoc James Pikul and the octopus expert Roger Hanlon, who took the famous video at the start of this piece—designed their material to work in a similar way. In place of the octopus’s soft flesh, they used a stretchy silicone sheet. And in place of the muscles, they used a mesh of synthetic fibers that were laid down in concentric rings. Normally, the silicone membrane would balloon outward into a sphere when inflated. But the rings of fibers constrain it, limiting its ability to expand and forcing it to shoot upward instead.

By changing the layout of the fibers, the team could create structures that would inflate into various shapes, like round bumps and pointy cones. Pikul grabbed a stone from a local riverbed and programmed the material to mimic its contours. He set the material to create hierarchical shapes—lumps on lumps. He even programmed it to duplicate the more complicated contours of a field of stones, and a plant with spiraling leaves.

For the moment, the material can only be programmed to mimic one predetermined shape at a time. Still, “the results are impressive,” writes Laschi, and “represent a first step toward more general camouflage abilities.” Indeed, Shepherd is now adapting the material so it can transform more flexibly—just like a real octopus. For example, the team could replace the fixed mesh of fibers with rubber tubes, parts of which could be inflated or deflated at whim. That way, they could change which bits of the surface are flexible, to determine how it will eventually inflate.

Shepherd’s team is just one of many groups who are attempting to build soft robots, which eschew the traditional hard surfaces of most machines in favor of materials that are soft, bouncy, and floppy. Such bots would theoretically be better at navigating tough terrain, resisting shocks and injuries, and even caring for people. Often, these researchers use the octopus as an inspiration. Last year, Harvard researchers 3-D printed a soft, autonomous “octobot” that moved by burning small amounts of onboard fuel, and channeling the resulting gas into its arms. Laschi, meanwhile, has built a robot with soft floppy arms that can wiggle through the water.

The robots are certainly cool, but they’re nowhere near as versatile as the real deal. Shepherd’s material, for example, can change texture about as fast as an actual octopus, but it can only make one rough shape at a time. The animal, meanwhile, can produce far finer undulations in its skin, which are tuned to whatever it sees in its environment. For now, nothing we produce comes anywhere close.

Space Travel's Existential Question
October 12th, 2017, 10:30 AM

The morning of January 27, 1967, Gus Grissom and his Apollo 1 crew put on their flight suits. Foil-clad, with breathing masks, they looked like a mid-century vision of the future brought to you by Reynolds Wrap. The crew of three were to perform a launch test that day, to see what it would be like when they rode a metal cone to space.*

Grissom had been to space before, during the Gemini program. That day’s practice wasn’t going well—certainly not the way one would hope an actual launch would go. First, the astronauts smelled something rotten in their oxygen. That delayed them by more than an hour. Then, their communications system began to fritz out. Of this, Grissom famously groused, “How are we going to get to the moon if we can't talk between two or three buildings?”

Later, though—into that same microphone and over those same lines—came a single word: “fire.”

It was true: Damaged wires had likely ignited a spark, which fed on the capsule’s pure-oxygen atmosphere and grew as it consumed space-age materials—nylon, foam.

The crew tried to escape the capsule. But the hatch wouldn’t open. All three astronauts suffocated inside the vessel that was supposed to carry them—and with them, us—into the future.**

The agency’s two other fatal accidents occurred during the same January week as Apollo’s: Challenger 19 years later, Columbia 17 years after that.*** And just three years ago, the private-spaceflight industry endured its first human loss, when Virgin Galactic’s SpaceShipTwo lost its copilot.****

After each fatal incident, the nation has responded with shock and grief. These explorers—our explorers, Earth’s explorers—paid for that exploration with their lives. Questions arose. Some—How did this happen?—are left to inspectors and investigators. But others—How big a cost are humans willing to bear to leave the planet?—lie in the public domain. The answers seem to have changed over the decades, as space travel evolved from something novel into something routine.

Today, industry and government are both upshifting gears, back into novelty, which means the public’s attitudes toward space travel and its inevitable accidents may return to what they were in NASA’s early, more adventurous days. After decades in a stable and predictable orbit, American spaceflight will return to new vehicles and, maybe, new destinations. The country is deciding which far-off world to point ships toward next, with the moon and Mars the most likely candidates. Private companies are doing the same, and preparing to take high rollers on suborbital romps. And with that leap into the unknown, Americans may become more tolerant of the loss of astronaut life. If they don’t, the government and private industry might not be able to make the leap at all.

We all know people probably will die on these new missions, especially if they become commonplace, as many hope. What no one knows is how we will all respond to those losses.

* * *

When Grissom and his compatriots signed on to the astronaut corps, times were different. They were different: cowboy test pilots—military men, mostly, with that rightest of stuff. Space, and the flight paths to and through it, was basically uncharted. Rockets blew up—a lot—listing sideways, spinning tail over teakettle, exploding, nosing into the ground like ostriches burying their heads.

And the astronauts themselves were, for the most part, inured to their mortality. In The Right Stuff, Tom Wolfe repeatedly references the euphemism the early astronauts used to describe fatal crashes: The fliers had “screwed the pooch.” It’s gallows humor: The pilots and astronauts couldn’t completely control their survival—but they could at once face death, distance themselves from it, and use tone to strip it of power.

The public perceived these guys (and they were all guys) as all-American swaggerers, laying their lives on the line for the primacy of the country.

“It was a battle in the Cold War,” says Rand Simberg, author of Safe Is Not An Option: Overcoming the Futile Obsession With Getting Everyone Back Alive That Is Killing Our Expansion Into Space.

The nation, of course, mourned the Apollo 1 crew’s loss—especially given its gruesome nature. But the public and the government were perhaps not surprised, or philosophically disturbed, that people had to die if Americans were to get to the moon in a decade. In an article called “Space Travel: Risk, Ethics, and Governance in Commercial Human Spaceflight,” space strategist Sara Langston looks to other fields to understand attitudes and regulations about space exploration. “In the Encyclopedia of Public Health, [Daniel] Krewski defines acceptable risk as the likelihood of an event whose probability of occurrence is small, whose consequences are so slight, or whose benefits (perceived or real) are so great that individuals or groups in society are willing to take or be subjected to the risk that the event might occur,” she writes. The risk of space accidents, by inference, is subject to the same kind of cost-benefit analysis.

After Apollo, though, came the staid shuttle program. And with it, the tenor of spaceflight changed. The Cold War ended in the ’90s. The spacecraft was called a shuttle. You know, like the bus that takes you to the airport. The Americans had already conquered spaceflight—we got to the moon, which was very hard and very far away and involved orbiting other bodies and sometimes landing. Spinning ellipses around our own planet in a sturdy vehicle? Easy.

The shuttle program left Americans—and perhaps the world—with the false sense that the space-pioneer days were over.

* * *

In technical terms, as the shuttle program developed, people began to think of its flights as operational rather than experimental. In experimental mode, engineers are still figuring the details out, fingering the edges of a craft’s envelope and seeing how hard and fast they can press before they get a cut. In operational mode, though, engineers are supposed to know most everything—the ups, downs, ins, and outs of performance given sundry contingencies.

While the shuttle mostly functioned well, that performance was never actually a given. The vehicle remained, to its last days, experimental, a status reflected in its success/failure rate. “I think people that know our industry kind of understand the edge we're on here, because these systems are tremendously complex,” says David Bearden, general manager of the NASA and civil-space division at the Aerospace Corporation. “If you look back, empirically or historically, at any launch system, about the best you're ever going to get is 1 in 200. On an airline it is a one-in-a-million chance. People who know the industry and know the way those systems operate understand that, I think.”
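Bearden’s numbers are easier to feel once you compound them. As a rough illustration—my arithmetic, not his—here is what a 1-in-200 per-flight risk looks like over a program as long as the shuttle’s 135 flights, compared with airline-like odds:

```python
# Illustrative arithmetic only: compounding a per-flight failure risk over a program.
# The 1-in-200 and one-in-a-million figures echo Bearden's comparison above;
# 135 is the number of shuttle missions actually flown.
def prob_at_least_one_loss(per_flight_risk, flights):
    """Chance of at least one failure across a number of independent flights."""
    return 1 - (1 - per_flight_risk) ** flights

print(round(prob_at_least_one_loss(1 / 200, 135), 2))        # ~0.49 -- roughly a coin flip
print(round(prob_at_least_one_loss(1 / 1_000_000, 135), 6))  # ~0.000135 at airline-like odds
```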

* * *

I was only six months old when space-shuttle mission STS-51-L sat on the launchpad on January 28, 1986. Aboard were six astronauts and Christa McAuliffe, a teacher from Concord, New Hampshire. The shuttle lifted off on the cold Florida morning. But then, nine miles above Earth’s surface, that seemingly reliable spacecraft broke apart, undone by the uncharacteristic chill at Cape Canaveral that day.

As a Miami Herald Tropic investigation later detailed, the astronauts didn’t die right away: The crew vehicle stayed intact, and continued to go up, before tipping back toward Earth, traveling 12 miles of sky before crashing into the cold ocean water—hard as the cement on the launchpad. The astronauts, the article said, were very likely alive until the very end and might have even been conscious.

Coverage from the days after the tragedy expresses, of course, sadness. “The Shuttle Explosion; Suddenly, Flash of Fear Dashes Watchers’ Hopes,” read a New York Times headline.

“What Went Wrong? Shuttle Loss in Ball of Fire Stuns Nation,” went one from the local Orlando Sentinel.

Both papers, though, declared that the show must go on: “Reflecting on Loss: Welling of Tears, a Desire to Press On,” said the Times.

“Three shuttle veterans lose peers but not faith in program,” said the Sentinel.

The losses, while tragic and (as the Rogers Commission Report would later reveal) avoidable, shouldn’t squash the program. Sacrifices, after all, must be made, for a new program whose utility the nation was still proving.

* * *

I was 17 when NASA lost the space shuttle Columbia in 2003. I’d grown up in Central Florida, not far from Kennedy Space Center. I’d seen almost every shuttle launch in person—with my classmates outside on the sidewalks of my schools, with my sisters in the backyard, and very occasionally from the far side of Cape Canaveral, with my parents. The sonic booms from landings sometimes set off the burglar alarms that hung from our door handles.

But in 2003, I had things to do that didn’t include watching out for spacecraft. I was on my way to a band (marching band, not the cool kind of band) rehearsal session when I heard about Columbia. News of the accident came slow and halting over whatever alt-rock station I was blasting from my Grand Am.

Later, investigators would reveal that a piece of foam insulation had broken off the external fuel tank during launch and struck the shuttle’s wing, leaving a hole that let superheated gas inside when the shuttle re-entered the atmosphere. The shuttle was going Mach 18, 37 miles above the ground, when it broke apart, scattering debris across thousands of square miles.

I remember sitting in my car in a church parking lot, thinking how it couldn’t be real. I remember thinking the radio host didn’t sound like he thought it was real. We’d probably both watched shuttles launch and land safely for much of our lives. To us, the whole program seemed routine—operational. It had moved into that realm of seeming safety, and risks seemed not just less likely but also less justified. And while we always knew this could happen, we never thought it would.

The Columbia disaster represented, unlike the Challenger explosion before it, the start of the finish for the shuttle program. NASA announced its end the very next year. Two strikes, shuttle’s out.

* * *

Sometimes, you hear the phrase “Failure is not an option” associated with NASA. But it was never a slogan at the agency; no one in mission control, that we know of, ever said it, and no manager passed it down. It was just a line in the movie Apollo 13. Failure is always an option: It has to be.

Of course, no one wants a rocket to blow up or a crew capsule to fall to Earth. But to undertake space travel, the undertakers have to acknowledge those possibilities and mitigate the risks. As William Gerstenmaier, NASA’s associate administrator for human exploration and operations, wrote in his paper “Staying Hungry: The Interminable Management of Risk in Human Spaceflight,” “We never simply accept it, but NASA, our stakeholders, and the public must acknowledge the risk as we move forward.”

The public, to some extent, also knows that’s the equation. But a 1-in-200 failure rate means that a loss doesn’t happen very often, which means that every one comes as a shock.

Still, astronauts’ deaths don’t always cause communal moral outrage. “A particularly risky venture can become socially acceptable in correlation with the value placed on it,” Langston wrote in her risk paper. If people value a space-exploration program, in other words, they’re okay with others risking their lives to push it forward.

Simberg contends that wasn’t fully true with the shuttle, as compared to Apollo—an inspiring and aspiring mission with political importance. “The reason we were so upset about losing these seven astronauts was that what they were doing was kind of trivial,” he says of Columbia.

We don’t always demand, though, that people be doing something Valuable that Benefits Humanity to let them risk their lives (and there were lots of ways the shuttle and, in particular, its trips to the always-peaceful International Space Station did benefit humanity). About 1 percent of people who try to climb Mount Everest historically die in the attempt, for example. And this despite the fact that topping Everest is not exactly exploration, with its $40,000 price tag and Sherpa guides and commercial expeditions. And it’s been done before.

Shuttle astronauts, meanwhile, had roughly a 1.5 percent chance of dying on a given trip to space. And trying to keep astronauts at least that safe—or safer—means the agency can’t go as boldly as private industry can.

* * *

The major players in crewed commercial spaceflight are SpaceX, which wants to eventually build a Martian colony; Blue Origin, whose founder, Jeff Bezos, envisions an outer space full of industry and long-term habitation; and Virgin Galactic, which wants to democratize access to space closer to home, with a rocket plane that is carried to about 50,000 feet by a mother ship, then climbs onward under its own power and glides back down at the behest and guidance of its pilots.

On October 31, 2014, Virgin Galactic paid a human price for that system. During a test flight that day, copilot Michael Alsbury unlocked SpaceShipTwo’s feathering system—which changes the shape of the craft to aid reentry—too early. Aerodynamic forces then pushed the feather open, and the vehicle broke apart. While pilot Peter Siebold parachuted to safety, Alsbury remained with the ship, and died on impact.

After the accident, Virgin allowed its already-booked customers to back out, but just 3 percent did.

SpaceX, meanwhile, has had its own explosive setbacks, and yet the company and leader Elon Musk still remain the industry’s darlings. SpaceX blew up an uncrewed Falcon 9 rocket in September 2016. In June of the year before, the company lost another Falcon that was supposed to resupply the Space Station. In test launches and landings of its reusable rockets, SpaceX has also had a vessel tip over into the ocean and explode (January 2016); crash into a ship (January 2015); and land “too hard for survival” (April 2015).

Based on this, the NewSpace industry seems to exist firmly in the experimental phase. But, more than that, the public seems to know—and accept—that status. “You understand that you're in a test-pilot phase,” says Bearden. “The public can process that and say, ‘That's not me. By the time I fly, they're going to have worked it out.’”

The public permits mistakes for the private space companies—because they produce rockets and results on non-geologic timescales, and lay out visions like “you can go to space” and “you can have a house on Mars.”

The FAA, which regulates commercial space activity via the Human Spaceflight Requirements for Crew and Spaceflight Participants, is also relatively forgiving. “Congress mandated these regulations in the Commercial Space Launch Amendments Act of 2004,” says the FAA’s description of this law. “Recognizing that this is a fledgling industry, the law required a phased approach in regulating commercial human spaceflight, with regulatory standards evolving as the industry matures,” attempting not to crush innovation with regulation. Flight providers do, though, have to get extremely informed consent from would-be astronauts.

NASA recognizes the value in this model, and in its different posture toward risk. The agency has teamed up with such space companies—letting them, among other things, shuttle cargo and crew to low Earth orbit. NASA no longer has to be all things to all people and missions, and can let those experimental upstarts do a little legwork.

The agency may also see, though, that the public perceives NewSpace cadets as pioneers—a lens through which it no longer sees NASA—and so forgives their mistakes, tallying them as the cost of innovation rather than as a cost not worth bearing. And perhaps the agency hopes for the same treatment itself as it turns those duties over to private companies and focuses on its own bold goals—its own new, risky, experimental phase of operations, with both the costs and the benefits that come with it.


* This article originally stated that there were four crew members aboard Apollo 1.

** This article originally misstated the cause of death for the Apollo 1 crew.

*** This article originally implied that the Columbia disaster occurred 36 years after the Challenger explosion.

**** This article originally stated that the Virgin Galactic crash resulted in the death of the craft’s pilot. We regret the errors.

Does Technology Need to Be Ethical?
October 12th, 2017, 10:30 AM

“The average citizen is starting to feel more and more like, ‘I’m not sure that I feel good about the way technology is interacting with my life,’” says Anil Dash, an entrepreneur, activist, and the CEO of Fog Creek Software, in an interview recorded at the Aspen Ideas Festival. As trust in the tech world continues to erode due to increased vulnerability to hacking and the proliferation of misinformation across Google and Facebook, Dash believes tech giants have a responsibility to society to be ethical. Says Dash: “If you’re the CEO of a major tech company, you are a political figure whether you choose to be or not.”

How Da Vinci 'Augmented Reality'—More Than 500 Years Ago
October 12th, 2017, 10:30 AM

We may think of Leonardo Da Vinci as an artist, but he was also a scientist. By incorporating anatomy, chemistry, and optics into his artistic process, Da Vinci created an augmented reality experience centuries before the concept even existed. This video details how Da Vinci made the Mona Lisa interactive using innovative painting techniques and the physiology of the human eye.

Read more about the science behind the Mona Lisa on The Atlantic.

Google X and the Science of Radical Creativity
October 12th, 2017, 10:30 AM

I. The Question

A snake-robot designer, a balloon scientist, a liquid-crystals technologist, an extradimensional physicist, a psychology geek, an electronic-materials wrangler, and a journalist walk into a room. The journalist turns to the assembled crowd and asks: Should we build houses on the ocean?


The setting is X, the so-called moonshot factory at Alphabet, the parent company of Google. And the scene is not the beginning of some elaborate joke. The people in this room have a particular talent: They dream up far-out answers to crucial problems. The dearth of housing in crowded and productive coastal cities is a crucial problem. Oceanic residences are, well, far-out. At the group’s invitation, I was proposing my own moonshot idea, despite deep fear that the group would mock it.

Like a think-tank panel with the instincts of an improv troupe, the group sprang into an interrogative frenzy. “What are the specific economic benefits of increasing housing supply?” the liquid-crystals guy asked. “Isn’t the real problem that transportation infrastructure is so expensive?” the balloon scientist said. “How sure are we that living in densely built cities makes us happier?” the extradimensional physicist wondered. Over the course of an hour, the conversation turned to the ergonomics of Tokyo’s high-speed trains and then to Americans’ cultural preference for suburbs. Members of the team discussed commonsense solutions to urban density, such as more money for transit, and eccentric ideas, such as acoustic technology to make apartments soundproof and self-driving housing units that could park on top of one another in a city center. At one point, teleportation enjoyed a brief hearing.

X is perhaps the only enterprise on the planet where regular investigation into the absurd is not just permitted but encouraged, and even required. X has quietly looked into space elevators and cold fusion. It has tried, and abandoned, projects to design hoverboards with magnetic levitation and to make affordable fuel from seawater. It has tried—and succeeded, in varying measures—to build self-driving cars, make drones that deliver aerodynamic packages, and design contact lenses that measure glucose levels in a diabetic person’s tears.

These ideas might sound too random to contain a unifying principle. But they do. Each X idea adheres to a simple three-part formula. First, it must address a huge problem; second, it must propose a radical solution; third, it must employ a relatively feasible technology. In other words, any idea can be a moonshot—unless it’s frivolous, small-bore, or impossible.

The purpose of X is not to solve Google’s problems; thousands of people are already doing that. Nor is its mission philanthropic. Instead X exists, ultimately, to create world-changing companies that could eventually become the next Google. The enterprise considers more than 100 ideas each year, in areas ranging from clean energy to artificial intelligence. But only a tiny percentage become “projects,” with full-time staff working on them. It’s too soon to know whether many (or any) of these shots will reach the moon: X was formed in 2010, and its projects take years; critics note a shortage of revenue to date. But several projects—most notably Waymo, its self-driving-car company, recently valued at $70 billion by one Wall Street firm—look like they may.

X is extremely secretive. The company won’t share its budget or staff numbers with investors, and it’s typically off-limits to journalists as well. But this summer, the organization let me spend several days talking with more than a dozen of its scientists, engineers, and thinkers. I asked to propose my own absurd idea in order to better understand the creative philosophy that undergirds its approach. That is how I wound up in a room debating a physicist and a roboticist about apartments floating off the coast of San Francisco.

I’d expected the team at X to sketch some floating houses on a whiteboard, or discuss ways to connect an ocean suburb to a city center, or just inform me that the idea was terrible. I was wrong. The table never once mentioned the words floating or ocean. My pitch merely inspired an inquiry into the purpose of housing and the shortfalls of U.S. infrastructure. It was my first lesson in radical creativity. Moonshots don’t begin with brainstorming clever answers. They start with the hard work of finding the right questions.

Creativity is an old practice but a new science. It was only in 1950 that J. P. Guilford, a renowned psychologist at the University of Southern California, introduced the discipline of creativity research in a major speech to the American Psychological Association. “I discuss the subject of creativity with considerable hesitation,” he began, “for it represents an area in which psychologists generally, whether they be angels or not, have feared to tread.” It was an auspicious time to investigate the subject of human ingenuity, particularly on the West Coast. In the next decade, the apricot farmland south of San Francisco took its first big steps toward becoming Silicon Valley.

Yet in the past 60 years, something strange has happened. As the academic study of creativity has bloomed, several key indicators of the country’s creative power have turned downward, some steeply. Entrepreneurship may have grown as a status symbol, but America’s start-up rate has been falling for decades. The label innovation may have spread like ragweed to cover every minuscule tweak of a soda can or a toothpaste flavor, but the rate of productivity growth has been mostly declining since the 1970s. Even Silicon Valley itself, an economic powerhouse, has come under fierce criticism for devoting its considerable talents to trivial problems, like making juice or hailing a freelancer to pick up your laundry.

Breakthrough technology results from two distinct activities that generally require different environments—invention and innovation. Invention is typically the work of scientists and researchers in laboratories, like the transistor, developed at Bell Laboratories in the 1940s. Innovation is an invention put to commercial use, like the transistor radio, sold by Texas Instruments in the 1950s. Seldom do the two activities occur successfully under the same roof. They tend to thrive in opposite conditions; while competition and consumer choice encourage innovation, invention has historically prospered in labs that are insulated from the pressure to generate profit.

The United States’ worst deficit today is not of incremental innovation but of breakthrough invention. Research-and-development spending has declined by two-thirds as a share of the federal budget since the 1960s. The great corporate research labs of the mid-20th century, such as Bell Labs and Xerox Palo Alto Research Center (PARC), have shrunk and reined in their ambitions. America’s withdrawal from moonshots started with the decline in federal investment in basic science. Allowing well-funded and diverse teams to try to solve big problems is what gave us the nuclear age, the transistor, the computer, and the internet. Today, the U.S. is neglecting to plant the seeds of this kind of ambitious research, while complaining about the harvest.

No one at X would claim that it is on the verge of unleashing the next platform technology, like electricity or the internet—an invention that could lift an entire economy. Nor is the company’s specialty the kind of basic science that typically thrives at research universities. But what X is attempting is nonetheless audacious. It is investing in both invention and innovation. Its founders hope to demystify and routinize the entire process of making a technological breakthrough—to nurture each moonshot, from question to idea to discovery to product—and, in so doing, to write an operator’s manual for radical creativity.

II. The Inkling

Inside X’s Mountain View headquarters, artifacts of projects and prototypes hang on the walls, as they might in a museum—an exhibition of alternative futures. A self-driving car is parked in the lobby. Drones shaped like Jedi starfighters are suspended from the rafters. Inside a three-story atrium, a large screen renders visitors as autonomous vehicles would see them—pointillist ghosts moving through a rainbow-colored grid. It looks like Seurat tried to paint an Atari game.

Just beyond the drones, I find Astro Teller. He is the leader of X, whose job title, captain of moonshots, is of a piece with his piratical, if perhaps self-conscious, charisma. He has a long black ponytail and silver goatee, and is wearing a long-sleeved T‑shirt, dark jeans, and large black Rollerblades. Fresh off an afternoon skate? I ask. “Actually, I wear these around the office about 98 percent of the time,” he says. I glance at an X publicist to see whether he’s serious. Her expression says: Of course he is.

Teller, 47, descends from a formidable line of thinkers. His grandfathers were Edward Teller, the father of the hydrogen bomb, and Gérard Debreu, a mathematician who won a Nobel Prize in Economics. With a doctorate in artificial intelligence from Carnegie Mellon, Teller is an entrepreneur, a two-time novelist, and the author of a nonfiction book, Sacred Cows, on marriage and divorce—co-written with his second wife. His nickname, Astro, though painfully on the nose for the leader of a moonshot factory, was bestowed upon him in high school, by friends who said his flattop haircut resembled Astroturf. (His given name is Eric.)

Astro Teller, the “captain of moonshots” at X, descends from Edward Teller, the father of the hydrogen bomb. (Justin Kaneps)

In 2010, Teller joined a nascent division within Google that would use the company’s ample profits to explore bold new ideas, which Teller called “moonshots.” The name X was chosen as a purposeful placeholder—as in, We’ll solve for that later. The one clear directive was what X would not do. While almost every corporate research lab tries to improve the core product of the mother ship, X was conceived as a sort of anti–corporate research lab; its job was to solve big challenges anywhere except in Google’s core business.

When Teller took the helm of X (which is now a company, like Google, within Alphabet), he devised the three-part formula for an ideal moonshot project: an important question, a radical solution, and a feasible path to get there. The proposals could come from anywhere, including X employees, Google executives, and outside academics. But grand notions are cheap and abundant—especially in Silicon Valley, where world-saving claims are a debased currency—and actual breakthroughs are rare. So the first thing Teller needed to build was a way to kill all but the most promising ideas. He assembled a team of diverse experts, a kind of Justice League of nerds, to process hundreds of proposals quickly and promote only those with the right balance of audacity and achievability. He called it the Rapid Evaluation team.

In the landscape of ideas, Rapid Eval members aren’t vertical drillers but rather oil scouts, skillful in surveying the terrain for signs of pay dirt. You might say it’s Rapid Eval’s job to apply a kind of future-perfect analysis to every potential project: If this idea succeeds, what will have been the challenges? If it fails, what will have been the reasons?

The art of predicting which ideas will become hits is a popular subject of study among organizational psychologists. In academic jargon, it is sometimes known as “creative forecasting.” But what sorts of teams are best at forecasting the most-successful creations? Justin Berg, a professor at the Stanford Graduate School of Business, set out to answer this question in a 2016 study focused on, of all things, circus performances.

Berg found that there are two kinds of circus professionals: creators who imagine new acts, and managers who evaluate them. He collected more than 150 circus-performance videos and asked more than 300 circus creators and managers to watch them and predict the performers’ success with an audience. Then he compared their reactions with those of more than 13,000 ordinary viewers.

Creators, Berg found, were too enamored of their own concepts. But managers were too dismissive of truly novel acts. The most effective evaluation team, Berg concluded, was a group of creators. “A solitary creator might fall in love with weird stuff that isn’t broadly popular,” he told me, “but a panel of judges will reject anything too new. The ideal mix is a panel of creators who are also judges, like the teams at X.” The best evaluators are like player-coaches—they create, then manage, and then return to creating. “They’re hybrids,” Berg said.

Rich DeVaul is a hybrid. He is the leader of the Rapid Eval team but he has also, like many members, devoted himself to major projects at X. He has looked into the feasibility of space elevators that could transport cargo to satellites without a rocket ship and modeled airships that might transport goods and people in parts of the world without efficient roads, all without ever touching the ground. “At one point, I got really interested in cold fusion,” he said. “Because why not?”

One of DeVaul’s most consuming obsessions has been to connect the roughly 4 billion people around the world who don’t have access to high-speed internet. He considers the internet the steam engine or electrical grid of the 21st century—the platform technology for a long wave of economic development. DeVaul first proposed building a cheap, solar-powered tablet computer. But the Rapid Eval team suggested that he was aiming at the wrong target. The world’s biggest need wasn’t hardware but access. Cables and towers were too expensive to build in mountains and jungles, and earthbound towers don’t send signals widely enough to make sense for poor, sparsely populated areas. The cost of satellites made those, too, prohibitive for poor areas. DeVaul needed something inexpensive that could live in the airspace between existing towers and satellites. His answer: balloons. Really big balloons.

The idea struck more than a few people as ridiculous. “I thought I was going to be able to prove it impossible really quickly,” said Cliff L. Biffle, a computer scientist and Rapid Eval manager who has been at X for six years. “But I totally failed. It was really annoying.” Here was an idea, the team concluded, that could actually work: a network of balloons, equipped with computers powered by solar energy, floating 13 miles above the Earth, distributing internet to the world. The cause was huge; the solution was radical; the technology was feasible. They gave it a name: Project Loon.

Rich DeVaul, a co-founder of Project Loon, which seeks to provide internet access to remote places using a fleet of balloons (Julia Wang / X)

At first, Loon team members thought the hardest problem would be sustaining an internet connection between the ground and a balloon. DeVaul and Biffle bought several helium balloons, attached little Wi‑Fi devices to them, and let them go at Dinosaur Point, in the Central Valley. As the balloons sluiced through the jet stream, DeVaul and his colleagues chased them down in a Subaru Forester rigged with directional antennae to catch the signal. They drove like madmen along the San Luis Reservoir as the balloons soared into the stratosphere. To their astonishment, the internet connection held. DeVaul was ecstatic, his steampunk vision of broadband-by-balloon seemingly within grasp. “I thought, The rest is just ballooning!” he said. “That’s not rocket science.”

He was right, in a way. Ballooning of the sort his team imagined isn’t rocket science. It’s harder.

Let’s start with the balloons. Each one, flattened, is the size of a tennis court, made of stitched-together pieces of polyethylene. At the bottom of the balloon hangs a small, lightweight computer with the same technology you would find at the top of a cell tower, with transceivers to beam internet signals and get information from ground stations. The computer system is powered by solar panels. The balloon is designed to float 70,000 feet above the Earth for months in one stretch. The next time you are at cruising altitude in an airplane, imagine seeing a balloon as far above you as the ground is below you.

Cliff L. Biffle, a member of X’s Rapid Eval team, which seeks to kill, as quickly as possible, ideas that will ultimately fail (Justin Kaneps)

The balloons have to survive in what is essentially an alien environment. At night, the temperature plunges to 80 degrees below zero Celsius, colder than your average evening on Mars. By day, the sun could fry a typical computer, and the air is too thin for a fan to cool the motherboard. So Loon engineers store the computer system in a specially constructed box—the original was a Styrofoam beer cooler—coated with reflective white paint.

The computer system, guided by an earthbound data center, can give the balloon directions (“Go northeast to Lima!”), but the stratosphere is not an orderly street grid in which traffic flows in predictable directions. It takes its name from the many strata, or layers, of air temperatures and wind currents. It’s difficult to predict which way the stratosphere’s winds will blow. To navigate above a particular town—say, Lima—the balloon cannot just pick any altitude and cruise. It must dive and ascend thousands of feet, sampling the gusts of various altitudes, until it finds one that is pointing in just the right direction. So Loon uses a team of balloons to provide constant coverage to a larger area. As one floats off, another moves in to take its place.
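To make that hunt concrete, here is a minimal sketch—with illustrative data and function names, not Loon’s actual flight software, which also leans on wind forecasts and fleet coordination—of the core idea: sample the wind at a few altitudes and pick the layer blowing most nearly toward the target.

```python
import math

# Toy version of the altitude hunt described above: given sampled wind vectors at a
# few candidate altitudes, pick the layer whose wind points most nearly toward the
# target. Data and names are illustrative stand-ins.
def best_altitude(winds, bearing_to_target_deg):
    """winds maps altitude (feet) -> (east, north) wind components in m/s."""
    def angle_error(vec):
        east, north = vec
        wind_bearing = math.degrees(math.atan2(east, north)) % 360
        diff = abs(wind_bearing - bearing_to_target_deg) % 360
        return min(diff, 360 - diff)
    return min(winds, key=lambda alt: angle_error(winds[alt]))

sampled = {55_000: (6.0, -2.0), 65_000: (1.5, 7.0), 75_000: (-4.0, 0.5)}
print(best_altitude(sampled, bearing_to_target_deg=10))  # -> 65000: wind headed just east of north
```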

Four years after Loon’s first real test, in New Zealand, the project is in talks with telecommunications companies around the world, especially where cell towers are hard to build, like the dense jungles and mountains of Peru. Today a network of broadband-beaming balloons floats above rural areas outside of Lima, delivering the internet through the provider Telefónica.

Improving internet access in Latin America, Africa, and Asia to levels now seen in developed countries would generate more than $2 trillion in additional GDP, according to a recent study by Deloitte. Loon is still far from its global vision, but capturing even a sliver of one percentage point of that growth would make it a multibillion-dollar business.

III. The Fail

Astro Teller likes to recount an allegorical tale of a firm that has to get a monkey to stand on top of a 10-foot pedestal and recite passages from Shakespeare. Where would you begin? he asks. To show off early progress to bosses and investors, many people would start with the pedestal. That’s the worst possible choice, Teller says. “You can always build the pedestal. All of the risk and the learning comes from the extremely hard work of first training the monkey.” An X saying is “#MonkeyFirst”—yes, with the hashtag—and it means “do the hardest thing first.”

But most people don’t want to do the hardest thing first. Most people want to go to work and get high fives and backslaps. Despite the conference-keynote pabulum about failure (“Fail fast! Fail often!”), the truth is that, financially and psychologically, failure sucks. In most companies, projects that don’t work out are stigmatized, and their staffs are fired. That’s as true in many parts of Silicon Valley as it is anywhere else. X may initially seem like a paradise of curiosity and carefree tinkering, a world apart from the drudgery required at a public company facing the drumbeat of earnings reports. But it’s also a place immersed in failure. Most green-lit Rapid Eval projects are unsuccessful, even after weeks, months, or years of one little failure after another.

At X, Teller and his deputies have had to build a unique emotional climate, where people are excited to take big risks despite the inevitability of, as Teller delicately puts it, “falling flat on their face.” X employees like to bring up the concept of “psychological safety.” I initially winced when I heard the term, which sounded like New Age fluff. But it turns out to be an important element of X’s culture, the engineering of which has been nearly as deliberate as that of, say, Loon’s balloons.

Kathy Hannun told me of her initial anxiety, as the youngest employee at X, when she joined in the spring of 2012. On her first day, she was pulled into a meeting with Teller and other X executives where, by her account, she stammered and flubbed several comments for fear of appearing out of her depth. But everyone, at times, is out of his or her depth at X. After the meeting, Teller told her not to worry about making stupid comments or asking ignorant questions. He would not turn on her, he said.

Hannun now serves as the CEO of Dandelion, an X spin-off that uses geothermal technology to provide homes in New York State with a renewable source of heating, cooling, and hot water. “I did my fair share of unwise and inexperienced things over the years, but Astro was true to his word,” she told me. The culture, she said, walked a line between patience and high expectations, with each quality tempering the other.

X encourages its most successful employees to talk about the winding and potholed road to breakthrough invention. This spring, André Prager, a German mechanical engineer, delivered a 25-minute presentation on this topic at a company meeting, joined by members of X’s drone team, called Project Wing. He spoke about his work on the project, which was founded on the idea that drones could be significant players in the burgeoning delivery economy. The idea had its drawbacks: Dogs may attack a drone that lands, and elevated platforms are expensive, so Wing’s engineers needed a no-landing/no-infrastructure solution. After sifting through hundreds of ideas, they settled on an automatic winching system that lowered and raised a specialized spherical hook—one that can’t catch on clothing or tree branches or anything else—to which a package could be attached.

In their address, Prager and his team spent less time on their breakthroughs than on the many failed cardboard models they discarded along the way. The lesson they and Teller wanted to communicate is that simplicity, a goal of every product, is in fact extremely complicated to design. “The best designs—a bicycle, a paper clip—you look and think, Well of course, it always had to look like that,” Prager told me. “But the less design you see, the more work was needed to get there.” X tries to celebrate the long journey of high-risk experimentation, whether it leads to the simplicity of a fine invention or the mess of failure.

Because the latter possibility is high, the company has also created financial rewards for team members who shut down projects that are likely to fail. For several years, Hannun led another group, named Foghorn, which developed technology to turn seawater into affordable fuel. The team appeared to be on track, until the price of oil collapsed in 2015 and its members forecast that their fuel couldn’t compete with regular gasoline soon enough to justify keeping the project alive. In 2016, they submitted a detailed report explaining that, despite advancing the science, their technology would not be economically viable in the near future. They argued for the project to be shut down. For this, the entire team received a bonus.

Some might consider these so-called failure bonuses to be a bad incentive. But Teller says it’s just smart business. The worst scenario for X is for many doomed projects to languish for years in purgatory, sucking up staff and resources. It is cheaper to reward employees who can say, “We tried our best, and this just didn’t work out.”

Recently, X has gone further in accommodating and celebrating failure. In the summer of 2016, the head of diversity and inclusion, a Puerto Rican–born woman named Gina Rudan, spoke with several X employees whose projects were stuck or shut down and found that they were carrying heavy emotional baggage. She approached X’s leadership with an idea based on Mexico’s Día de los Muertos, or Day of the Dead. She suggested that the company hold an annual celebration to share stories of pain from defunct projects. Last November, X employees gathered in the main hall to hear testimonials, not only about failed experiments but also about failed relationships, family deaths, and personal tragedies. They placed old prototypes and family mementos on a small altar. It was, several X employees told me, a resoundingly successful and deeply emotional event.

No failure at X has been more public than Google Glass, the infamous head-mounted wearable computer that resembled a pair of spectacles. Glass was meant to be the world’s next great hardware evolution after the smartphone. Even more quixotically, its hands-free technology was billed as a way to emancipate people from their screens, making technology a seamless feature of the natural world. (To critics, it was a ploy to eventually push Google ads as close to people’s corneas as possible.) After a dazzling launch in 2013 that included a 12-page spread in Vogue, consumers roundly dissed the product as buggy, creepy, and pointless. The last of its dwindling advocates were branded “glassholes.”

A 2013 public demonstration of Google Glass, X’s most infamous failure to date (Photography Inc. / Corbis / Getty)

I found that X employees were eager to talk about the lessons they drew from Glass’s failure. Two lessons, in particular, kept coming up in our conversations. First, they said, Glass flopped not because it was a bad consumer product but because it wasn’t a consumer product at all. The engineering team at X had wanted to send Glass prototypes to a few thousand tech nerds to get feedback. But as buzz about Glass grew, Google, led by its gung-ho co-founder Sergey Brin, pushed for a larger publicity tour—including a TED Talk and a fashion show with Diane von Furstenberg. Photographers captured Glass on the faces of some of the world’s biggest celebrities, including Beyoncé and Prince Charles, and Google seemed to embrace the publicity. At least implicitly, Google promised a product. It mailed a prototype. (Four years later, Glass has reemerged as a tool for factory workers, the same group that showed the most enthusiasm for the initial design.)

But Teller and others also saw Glass’s failure as representative of a larger structural flaw within X. It had no systemic way of turning science projects into businesses, or at least it hadn’t put enough thought into that part of the process. So X created a new stage, called Foundry, to serve as a kind of incubator for scientific breakthroughs as its team develops a business model. The division is led by Obi Felten, a Google veteran whose title says it all: head of getting moonshots ready for contact with the real world.

Obi Felten leads Foundry, a division of X tasked with turning scientific breakthroughs into marketable products. (Justin Kaneps)

“When I came here,” Felten told me, “X was this amazing place full of deep, deep, deep geeks, most of whom had never taken a product out into the world.” In Foundry, the geeks team up with former entrepreneurs, business strategists from firms like McKinsey, designers, and user-experience researchers.

One of the latest breakthroughs to enter Foundry is an energy project code-named Malta, which is an answer to one of the planet’s most existential questions: Can wind and solar energy replace coal? The advent of renewable-energy sources is encouraging, since three-quarters of global carbon emissions come from fossil fuels. But there is no clean, cost-effective, grid-scale technology for storing wind or solar energy for those times when the air is calm or the sky is dark. Malta has found a way to do it using molten salt. In Malta’s system, power from a wind farm would be converted into extremely hot and extremely cold thermal energy. The warmth would be stored in molten salt, while the cold energy (known internally as “coolth”) would live in a chilly liquid. A heat engine would then recombine the warmth and coolth as needed, converting them into electric energy that would be sent back out to the grid. X believes that salt-based thermal storage could be considerably cheaper than any other grid-scale storage technology in the world.
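To get a feel for why splitting energy into heat and “coolth” can work at all, here is a back-of-the-envelope sketch—the temperatures and the single machine-efficiency factor are my own illustrative assumptions, not X’s figures—of a pumped-thermal round trip:

```python
# Back-of-the-envelope sketch of a pumped-thermal round trip, the general class of
# system described above. Temperatures and the machine-efficiency factor are
# illustrative guesses, not X's numbers.
def round_trip_efficiency(t_hot_k, t_cold_k, machine_efficiency=0.7):
    """Charge with a heat pump, discharge with a heat engine, between two stores."""
    carnot_cop = t_hot_k / (t_hot_k - t_cold_k)   # heat stored per unit of work, ideal limit
    carnot_engine = 1 - t_cold_k / t_hot_k        # work recovered per unit of heat, ideal limit
    ideal = carnot_cop * carnot_engine            # equals 1.0: a perfect round trip loses nothing
    return ideal * machine_efficiency ** 2        # real losses going in and coming out

# Hot salt near 565 C (838 K), cold store near -63 C (210 K): roughly half the energy back out.
print(round(round_trip_efficiency(t_hot_k=838, t_cold_k=210), 2))  # -> 0.49
```

In this idealized accounting, the reservoirs themselves lose nothing; everything rides on how efficient the compressors, turbines, and heat exchangers are—which is presumably where much of Malta’s engineering effort lies.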

The current team leader is Raj B. Apte, an ebullient entrepreneur and engineer who made his way to X through PARC. He compares the project’s recent transition to Foundry to “when you go from a university lab to a start-up with an A-class venture capitalist.” Now that Apte and his team have established that the technology is viable, they need an industry partner to build the first power plant. “When I started Malta, we very quickly decided that somewhere around this point would be the best time to fire me,” Apte told me, laughing. “I’m a display engineer who knows about hetero-doped polysilicon diodes, not a mechanical engineer with a background in power plants.” Apte won’t leave X, though. Instead he will be converted into a member of the Rapid Eval team, where X will store his creative energies until they are deployed to another project.

Raj B. Apte, the leader of Project Malta, which seeks to store wind power in molten salt (Justin Kaneps)

Thinking about the creation of Foundry, it occurred to me that X is less a moonshot factory than a moonshot studio. Like MGM in the 1940s, it employs a wide array of talent, generates a bunch of ideas, kills the weak ones, nurtures the survivors for years, and brings the most-promising products to audiences—and then keeps as much of the talent around as possible for the next feature.

IV. The Invention

Technology is feral. It takes teamwork to wrangle it and patience to master it, and yet even in the best of circumstances, it runs away. That’s why getting invention right is hard, and getting commercial innovation right is hard, and doing both together—as X hopes to—is practically impossible. That is certainly the lesson from the two ancestors of X: Bell Laboratories and Xerox PARC. Bell Labs was the preeminent science organization in the world during the middle of the 20th century. From 1940 to 1970, it gave birth to the solar cell, the laser, and some 9 percent of the nation’s new communications patents. But it never merchandised the vast majority of its inventions. As the research arm of AT&T’s government-sanctioned monopoly, it was legally barred from entering markets outside of telephony.

In the 1970s, just as the golden age at Bell Labs was ending, its intellectual heir was rising in the West. At Xerox PARC, now known as just PARC, another sundry band of scientists and engineers laid the foundation for personal computing. Just about everything one associates with a modern computer—the mouse, the cursor, applications opening in windows—was pioneered decades ago at PARC. But Xerox failed to appreciate the tens of trillions of dollars locked within its breakthroughs. In what is now Silicon Valley lore, it was a 20‑something entrepreneur named Steve Jobs who in 1979 glimpsed PARC’s computer-mouse prototype and realized that, with a bit of tinkering, he could make it an integral part of the desktop computer.

Innovators are typically the heroes of the story of technological progress. After all, their names and logos are the ones in our homes and in our pockets. Inventors are the anonymous geeks whose names lurk in the footnotes (except, perhaps, for rare crossover polymaths such as Thomas Edison and Elon Musk). Given our modern obsession with billion-dollar start-ups and mega-rich entrepreneurs, we have perhaps forgotten the essential role of inventors and scientific invention.

A workshop at X where prototypes are created (Justin Kaneps)

The decline in U.S. productivity growth since the 1970s puzzles economists; potential explanations range from an aging workforce to the rise of new monopolies. But John Fernald, an economist at the Federal Reserve, says we can’t rule out a drought of breakthrough inventions. He points out that the notable exception to the post-1970 decline in productivity occurred from 1995 to 2004, when businesses throughout the economy finally figured out information technology and the internet. “It’s possible that productivity took off, and then slowed down, because we picked all the low-hanging fruit from the information-technology wave,” Fernald told me.

The U.S. economy continues to reap the benefits of IT breakthroughs, some of which are now almost 50 years old. But where will the next brilliant technology shock come from? As total federal R&D spending has declined—from nearly 12 percent of the budget in the 1960s to 4 percent today—some analysts have argued that corporate America has picked up the slack. But public companies don’t really invest in experimental research; their R&D is much more D than R. A 2015 study from Duke University found that since 1980, there has been a “shift away from scientific research by large corporations”—the triumph of short-term innovation over long-term invention.

The decline of scientific research in America has serious implications. In 2015, MIT published a devastating report on the landmark scientific achievements of the previous year, including the first spacecraft landing on a comet, the discovery of the Higgs boson particle, and the creation of the world’s fastest supercomputer. None of these was an American-led accomplishment. The first two were the products of a 10-year European-led consortium. The supercomputer was built in China.

As the MIT researchers pointed out, many of the commercial breakthroughs of the past few years have depended on inventions that occurred decades ago, and most of those were the results of government investment. From 2012 to 2016, the U.S. was the world’s leading oil producer. This was largely thanks to hydraulic fracturing experiments, or fracking, which emerged from federally funded research into drilling technology after the 1970s oil crisis. The recent surge in new cancer drugs and therapies can be traced back to the War on Cancer announced in 1971. But the report pointed to more than a dozen research areas where the United States is falling behind, including robotics, batteries, and synthetic biology. “As competitive pressures have increased, basic research has essentially disappeared from U.S. companies,” the authors wrote.

It is in danger of disappearing from the federal government as well. The White House budget this year proposed cutting funding for the National Institutes of Health, the crown jewel of U.S. biomedical research, by $5.8 billion, or 18 percent. It proposed slashing funding for disease research, wiping out federal climate-change science, and eliminating the Energy Department’s celebrated research division, ARPA-E.

The Trump administration’s thesis seems to be that the private sector is better positioned to finance disruptive technology. But this view is ahistorical. Almost every ingredient of the internet age came from government-funded scientists or research labs purposefully detached from the vagaries of the free market. The transistor, the fundamental unit of electronics hardware, was invented at Bell Labs, inside a government-sanctioned monopoly. The first model of the internet was developed at the government’s Advanced Research Projects Agency, now called DARPA. In the 1970s, several of the agency’s scientists took their vision of computers connected through a worldwide network to Xerox PARC.

“There is still a huge misconception today that big leaps in technology come from companies racing to make money, but they do not,” says Jon Gertner, the author of The Idea Factory, a history of Bell Labs. “Companies are really good at combining existing breakthroughs in ways that consumers like. But the breakthroughs come from patient and curious scientists, not the rush to market.” In this regard, X’s methodical approach to invention, while it might invite sneering from judgmental critics and profit-hungry investors, is one of its most admirable qualities. Its pace and its patience are of another era.

V. The Question, Again

Any successful organization working on highly risky projects has five essential features, according to Teresa Amabile, a professor at Harvard Business School and a co-author of The Progress Principle. The first is “failure value,” a recognition that mistakes are opportunities to learn. The second is psychological safety, the concept so many X employees mentioned. The third is multiple diversities—of backgrounds, perspectives, and cognitive styles. The fourth, and perhaps most complicated, is a focus on refining questions, not just on answers; on routinely stepping back to ask whether the problems the organization is trying to solve are the most important ones. These are features that X has self-consciously built into its culture.

The fifth feature is the only one that X does not control: financial and operational autonomy from corporate headquarters. That leads to an inevitable question: How long will Alphabet support X if X fails to build the next Google?

The co-founders of Google, Brin and Larry Page, clearly have a deep fondness for X. Page once said that one of his childhood heroes was Nikola Tesla, the polymath Serbian American whose experiments paved the way for alternating current and remote controls. “He was one of the greatest inventors, but it’s a sad, sad story,” Page said in a 2008 interview. “He couldn’t commercialize anything, he could barely fund his own research. You’d want to be more like Edison … You’ve got to actually get [your invention] into the world; you’ve got to produce, make money doing it.”

Nine years later, this story seems like an ominous critique of X, whose dearth of revenue makes it more like Tesla’s laboratory than Edison’s factory. Indeed, the most common critique of X that I heard from entrepreneurs and academics in the Valley is that the company’s prodigious investment has yet to produce a blockbuster.

Several X experiments have been profitably incorporated into Google already. X’s research into artificial intelligence, nicknamed Brain, is now powering some Google products, like its search and translation software. And an imminent blockbuster may be hiding in plain sight: In May, Morgan Stanley analysts told investors that Waymo, the self-driving-car company that incubated at X for seven years, is worth $70 billion, more than the market cap of Ford or GM. The future of self-driving cars—how they will work, and who exactly will own them—is uncertain. But the global car market generates more than $1 trillion in sales each year, and Waymo’s is perhaps the most advanced autonomous-vehicle technology in the world.

What’s more, X may benefit its parent company in ways that have nothing to do with X’s own profits or losses. Despite its cuddly and inspirational appeal, Google is a mature firm whose 2017 revenue will likely surpass $100 billion. Growing Google’s core business requires salespeople and marketers who perform ordinary tasks, such as selling search terms to insurance companies. There is nothing wrong with these jobs, but they highlight a gap—perhaps widening—between Silicon Valley’s world-changing rhetoric and what most people and companies actually do there.

X sends a corporate signal, both internally and externally, that Page and Brin are still nurturing the idealism with which they founded what is now basically an advertising company. Several business scholars have argued that Google’s domination of the market for search advertising is so complete that it should be treated as a monopoly. In June, the European Union slapped Google with a $2.7 billion antitrust fine for promoting its own shopping sites at the expense of competitors. Alphabet might use the projects at X to argue that it is a benevolent giant willing to spend its surplus on inventions that enrich humanity, much like AT&T did with Bell Labs.

All of that said, X’s soft benefits and theoretical valuations can go only so far; at some point, Alphabet must determine whether X’s theories of failure, experimentation, and invention work in practice. After several days marinating in the company’s idealism, I still wondered whether X’s insistence on moonshots might lead it to miss the modest innovations that typically produce the most-valuable products. I asked Astro Teller a mischievous question: Imagine you are participating in a Rapid Eval session in the mid-1990s, and somebody says she wants to rank every internet page by influence. Would he champion the idea? Teller saw right through me: I was referring to PageRank, the software that grew into Google. He said, “I would like to believe that we would at least go down the path” of exploring a technology like PageRank. But “we might have said no.”

I then asked him to imagine that the year was 2003, and an X employee proposed digitizing college yearbooks. I was referring to Facebook, now Google’s fiercest rival for digital-advertising revenue. Teller said he would be even more likely to reject that pitch. “We don’t go down paths where the hard stuff is marketing, or understanding how people get dates.” He paused. “Obviously there are hard things about what Facebook is doing. But digitizing a yearbook was an observation about connecting people, not a technically hard challenge.”

X has a dual mandate to solve huge problems and to build the next Google, two goals that Teller considers closely aligned. And yet Facebook grew to rival Google, as a platform for advertising and in financial value, by first achieving a quotidian goal. It was not a moonshot but rather the opposite—a small step, followed by another step, and another.

X continues to quietly resist the modern attitude toward innovation, with its insistence on quick products and quick profits. For better and worse, the company is imbued with an appreciation for the long gestation period of new technology.

Technology is a tall tree, John Fernald told me. But planting the seeds of invention and harvesting the fruit of commercial innovation are entirely distinct skills, often mastered by different organizations and separated by many years. “I don’t think of X as a planter or a harvester, actually,” Fernald said. “I think of X as building taller ladders. They reach where others cannot.” Several weeks later, I repeated the line to several X employees. “That’s perfect,” they said. “That’s so perfect.” Nobody knows for sure what, if anything, the employees at X are going to find up on those ladders. But they’re reaching. At least someone is.

Doctors Get Their Own Second Opinions
October 10th, 2017, 10:30 AM

ADELPHI, Maryland—In a quiet voice and in her native Spanish, the woman explained to Dr. Shantanu Nundy that she had been feeling dizzy whenever she stood up.

She cleaned houses and worked in a store. There was a lot going on at home—and now this. She choked up describing it all.

Nundy’s clinic, called Mary’s Center, is a primary-care practice, and hers was a classic primary-care problem: common, yet strange; vague, yet worrisome—troubling enough to send the woman to the emergency room the day before, sticking her with a $200 bill. Still, the dizzy spells were not definitive enough for the ER to do anything about them.

Nundy suspected she had something deep inside her ear that was throwing off her balance. To make sure, he had her perform something called the Dix-Hallpike test: From a sitting position, he asked her to fall back onto the exam table, then toss her head to one side. That would help determine whether the source of the dizziness was a problem in the inner ear.

It didn’t really work. When she sat back up, she felt fine.

Nundy stepped into the hallway and wrote up her case in the clinic’s electronic medical record. But he still wanted to be sure the cause of the dizziness wasn’t a small stroke or something more serious.

He opened a new tab on his computer and went to a new website that he helps design and run: the Human Diagnosis Project, or Human Dx. The project allows primary-care doctors to ask for assistance on difficult cases from an online network of physicians all over the world.

He clicked “get help on a case” and, on a checklist-style page, input that she was “43f”—a 43-year-old female—with episodic dizziness for the past two months. He then submitted the case to a doctor at another Mary’s Center clinic, as well as to Human Dx’s entire database of nearly 7,000 doctors.

Trained in internal medicine, Nundy now leads the nonprofit arm of Human Dx, but he spends Fridays at the clinic as its only provider for adults. (Other doctors and nurses see children there the other days of the week.)

Mary’s Center is a safety-net clinic, so its patients pay according to their income. At just after 8:30, the waiting room was bustling. The staff issued each patient a number and called them back in English and Spanish—“Twenty-six ... veintiséis!”

Nundy says about 80 percent of his patients are uninsured, in some cases because of their immigration status. Even for those with insurance, a specialist might be out of reach because of high deductibles and co-pays or long wait times.

“For you and me, someone who has insurance, the standard of care is that you see an expert who lives and breathes ... your diagnosis,” Nundy says. But for the 28 million uninsured Americans, seeing, say, a dermatologist or a neurologist usually means getting on long waiting lists for a doctor who is willing to volunteer his or her time.

Human Dx might help doctors confirm their suspected diagnoses or think of things to rule out. At Mary’s Center, one man came in complaining of headaches and nausea, and the Human Dx physicians suggested a blood test called an ESR. Another time, Nundy used it to confirm a suspected case of rheumatoid arthritis before putting a low-income patient on a heavy-duty course of medications.

Experienced doctors use Human Dx for their most difficult cases, and newer providers use it to hone their skills. Johns Hopkins Hospital and other teaching hospitals are now using it to train medical residents. Georgia Lewis, a nurse practitioner who works with Nundy, used Human Dx when, two months into her stint at Mary’s Center, all the other providers went on vacation. Rashes can be confounding, so she’ll upload those cases to Human Dx along with a photo.

The contributors to the project are vetted based on how accurately they’ve solved past cases. Human Dx uses machine learning, which means that eventually the algorithms powering the diagnosis suggestions will become “smarter” based on the input of the doctors using it. The hope is that, over time, Human Dx can help reduce misdiagnoses, which according to studies happen up to 20 percent of the time.

Human Dx hopes to soon roll out to all 1,300 safety-net clinics in the United States. Ron Yee, the chief medical officer of the National Association of Community Health Centers, is helping clinics like Mary’s Center start using the platform. “We thought we can really help our communities because we have challenges getting specialty care,” he said.

Yee and his colleagues are still figuring out how to fit Human Dx into so many primary-care doctors’ workflows. They’re also puzzling through that eternal health-care question: how to get paid for it. “Does insurance accept this?” Yee said. “I don’t know what it looks like.”

Nundy acknowledges that Human Dx adds time to a doctor’s day. But he says researching difficult cases already adds time, as does reading reference materials or calling his med-school friends for their advice. He hopes that as the project progresses, it could count toward doctors’ continuing medical education, licensing requirements, or student loans. Eventually, he hopes to get all the area’s specialists who treat the uninsured on Human Dx, so they can offer their counsel digitally and save their charity care for those who really need to be seen in person.

It usually takes about six hours to get a response through Human Dx, but a little over an hour after Nundy had seen the woman with the dizzy spells, a few responses had already trickled in. The relative likelihood of the doctors’ guesses was represented by little green bars, like a Wi-Fi signal. The most common suggested diagnosis was dehydration, followed by stress, a ministroke, or Ménière’s disease, a disorder of the inner ear. “Now when I see a person with dizziness,” Nundy said, “I’ll think about Ménière’s disease.”

Most likely, the woman was just stressed and tired. But for Nundy and other primary-care doctors using Human Dx, it’s worth carefully considering “the consequences of being wrong. If this was my mom or my sister ... that’s what we would want,” he said. “That’s what patients deserve.”

Against the Travel Neck Pillow
October 10th, 2017, 10:30 AM

Is there a pillow as useless as the U-shaped travel neck pillow? There is not. This half-ovate, toilet-seat-cover-esque object reigns as King of Travel Accessories, while failing miserably at its sole intended use. It is a scourge for reasons that I will outline in this essay and of which, by the end, I will convince you without question.

This past summer, I had occasion to travel by plane with such a pillow—memory foam in a pleasant maroon—and did so thoughtlessly, stuffing it into my carry-on as if it were my passport, or a book to ignore while watching, God willing, episodes of Sex and the City on the tiny television. When it came time to attempt sleep, I, like many of my fellow passengers, dutifully placed the U-shaped pillow on my shoulders. As my neck protruded an uncomfortable distance from the seat back, I let my head fall to my left. No good. I let my head fall to my right. No good. I scrunched the pillow up, so it was more like a tiny, oddly shaped normal pillow, but the damn thing kept bouncing back to U-shape, which, by design, has a hole in it, so that was definitely no good.

This damn pillow was no good.

It might come as a shock to you to hear someone speak the truth about U-shaped neck pillows so plainly, as this sort of pillow has been allowed to exist unchecked since it was patented in 1929. I understand and will allow you a moment to compose yourself. Have you taken it? Okay. The U-shaped neck pillow is an unsupportive abomination; a pernicious, deceitful, recklessly ubiquitous travel trinket lulling the masses not to sleep but to a zombielike restlessness for which they have been trained to blame themselves, i.e., “I can’t sleep on airplanes.” The U-shaped travel neck pillow is a useless trash pillow for nobody.

But not everyone agrees. “I bought this pillow for the long-weekend holiday trip. The memory foam is the perfect firmness, and it is so soft and comfortable,” says someone named Ivan in an Amazon review of a neck pillow similar to that which failed me on my recent flight. Okay, Ivan. Someone named Allen says, “I use this in the car. I fall asleep very easy. This keeps my neck comfortable and I don't wake up with neck pain.” Okay, Allen. Someone named Cass says, “I returned it as it had a horrible chemical smell, plus whatever was inside was a solid piece. I wanted something that had little pellets.” Well. This one seems like more of a “Cass” issue, actually.

Brad John, the cofounder of Flight 001, a popular chain of travel stores about which Martha Stewart has allegedly commented, “I love this store, it looks like an airplane,” told me the U-shaped travel pillow sells very well, even though there hasn’t been much innovation in the market. “They’re basically the same as they’ve always been. We sell the heated ones, the inflatable ones, the foam ones.” The main advancement, he said, and the top seller at the moment, is a convertible travel pillow “which you can either make into a regular pillow or a U-neck.” Very interesting that the top-selling U-shaped neck pillow is one that has the ability to function as a normal, non-U-shaped neck pillow.

Brad John himself uses a normal pillow on flights. “I just don’t find the neck pillow comfortable,” he said, “but that’s just personal preference.”

Everyone I spoke with agreed that the U-shaped neck pillow stinks, notably my friend Megan Reynolds, who said, “We have one in the house but the boy cat uses it for sex.” My friend Lindsay Robertson, to whom I was referred explicitly because she regularly uses a U-shaped neck pillow on flights, proved to secretly be a member of the U-shaped-neck-pillow resistance: “I never actually use it as a neck pillow, because I can't sleep that way—I'm not sure anyone can,” she told me. Instead, she puts her neck pillow on the tray table in front of her, takes off her glasses, puts her hands in her lap, and “[lets her] face fall completely forward into the pillow, as if [she has] expired.”

What accounts for why some derive comfort from the U-shaped neck pillow—(liars)—and some do not? I asked Mary O’Connor, who is a professor of orthopedics and rehabilitation and the director of the Center for Musculoskeletal Care at Yale. “I’m unaware that there is any clinical data that shows they’re effective in reducing neck strain or neck discomfort,” she said. “However, many of us who travel have experienced falling asleep with our neck in a weird position and it bothering us thereafter. So, I think they can be helpful, but that depends on how they’re used and whether they support the neck.”

The ideal pillow, she said, would keep your head and neck in neutral alignment with your spine, so you’re not too far forward, or backward, or too far to one side or the other. “But how do you know, when you’re in the airport, that the pillow you’re going to purchase is going to give you the right support?” O’Connor asks. “The pillows are all the same. Some people have short necks, some people have long necks, and there’s no ability to look and say, ‘I need this design or this size pillow for my neck, to really work well for me.’ And that’s part of the challenge. Could one of those pillows help someone? Yes, they could. Will they help everyone? Probably not.”

I attempted to find research pointing to the uselessness or usefulness of the dreaded U-shaped neck pillow, and came up empty-handed. However I did find a study titled “The Use of Neck-Support Pillows and Postural Exercises in the Management of Chronic Neck Pain,” which was published in The Journal of Rheumatology in 2016 and dealt with the positive effects of bed-specific neck-support pillows for people with chronic neck pain. I spoke to the study’s coauthor Brian Feldman, a senior scientist and head of the Division of Rheumatology at Toronto’s Hospital for Sick Children, who made sure I understood that his study was not, actually, about the U-shaped travel pillows people use on planes. I understand. I thought he might be able to offer some insight, anyway.

Offering, he stressed, only his own opinion of U-shaped travel pillows, he said, “I can’t stand them. I never use them. They’re not built strongly enough or firm enough. There are all kinds of new gizmos that people have been developing for pillows for sleep in transportation, and they tend to be more like straps that hold your head in place, or boxlike structures that you can sit forward and place your head in, or neck collars, which give you much more support around your neck. Those kinds of things are probably all much better than the typical U-shaped pillow.”

Keeping your neck in a nice physiological position while sleeping is a wonderful thing to do, he said, but the issue with U-shaped pillows is that they aren’t built to be firm enough or high enough to help most people, plus they don’t circle around the neck properly. “They just don’t do the job they’re supposed to do,” Feldman says. In order to work, he thinks they’d have to look more like the kind of rigid neck collar you see on someone who has recently injured their neck, one “that presses up into the head and keeps the chin up and supported so the head doesn’t flop over in any way once you’ve fallen asleep” while sitting up.

Also, don’t they look like the first-ever stone pillow used by Mesopotamians in 7,000 BC? Seems like we should not still be using a pillow that looks like the first-ever stone pillow used by Mesopotamians in 7,000 BC, but that’s just my opinion.

If I could leave you with one piece of advice, it would be: Take a hard look at whether or not your U-shaped travel pillow is worth toting on your next flight. Are you stuffing it into your carry-on out of usefulness, or out of habit? Is it taking up precious storage space because it will help you sleep, or because you thought you should buy it even though you’ve encountered no evidence, either personal or scientific, to suggest that this thought is correct? Are you wrong, or do you agree with me? Ask yourself these questions, and then leave the U-shaped pillow behind.

(Unless you’re a boy cat and you’d like to use it for sex.)

Mayonnaise, Disrupted
October 10th, 2017, 10:30 AM

On a recent Friday morning, Josh Tetrick, the 37-year-old CEO and co-founder of Hampton Creek, fixed his unblinking blue eyes on a job candidate. The pair was sitting at a workstation near the entrance to the company’s warehouselike San Francisco headquarters, where Tetrick frequently holds meetings in plain view of the company’s more than 130 employees. Around Tetrick—a muscular ex-linebacker in jeans and a T-shirt—was even more Tetrick: a poster of him watching Bill Gates eat a muffin, a framed photograph of him with a golden retriever, an employee’s T-shirt emblazoned with “What would you attempt if you knew you could not fail?”—one of Tetrick’s many slogans. (Others include “What would it look like if we just started over?” and “Be gorilla.”)

The interviewee, who was applying for a mid-level IT job, started listing his qualifications, but Tetrick seemed more interested in talking about the company’s mission—launching into what he promised was a “non-consumer-friendly” look at the “holy-fuck kind of things” Hampton Creek is doing to ensure “everyone is eating well.” He gestured to a slide deck on a flatscreen TV showing photographs of skinny black children next to one of an overweight white woman. They represented, he said, a handful of the 1.1 billion people who “go to bed hungry every night,” the 6.5 billion “just eating crappy food,” and the 2.1 billion from both groups “being fucked right now” by micronutrient deficiencies. “This is our food system today,” Tetrick said. “It’s a food system that is failing most people in the world. And these pillars of our food system today, we think, need to be rethought from the ground up.”

So far, the most prominent manifestation of Tetrick’s plan to rethink the pillars of our food system is a line of vegan mayonnaise, sold in plain, sriracha, truffle, chipotle, garlic, and “awesomesauce” flavors. Hampton Creek also sells vegan cookies and salad dressings, which are marketed, like the mayo, under the brand Just—a reference to righteousness, not simplicity—in venues ranging from Whole Foods to Walmart. And it sells a powdered egg substitute to General Mills for use in baked goods.

Tetrick insists that Hampton Creek is not a vegan-food producer. He has called it a “tech company that happens to be working with food” and has said, “The best analogue to what we’re doing is Amazon.” Using robotics, artificial intelligence, data science, and machine learning—the full monty of Silicon Valley’s trendiest technologies—Hampton Creek is, according to Tetrick, attempting to analyze the world’s 300,000-plus plant species to find sustainable, animal-free alternatives to ingredients in processed foods.

This pitch has captured the imagination of some of Silicon Valley’s most coveted venture capitalists. Since Hampton Creek’s founding, in 2011, the company has attracted $247 million from investors including Salesforce CEO Marc Benioff, Yahoo co-founder Jerry Yang, and Peter Thiel’s Founders Fund. It was lauded by Gates in 2013 as a hopeful example of “the future of food” and named a World Economic Forum Technology Pioneer two years later. In 2014, Tetrick was cheered as one of Fortune’s 40 Under 40. He wooed a star-studded stable of advisers, including former Health and Human Services Secretary Kathleen Sebelius, and A-list fans such as John Legend and the fashion designer Stella McCartney. Last fall, Hampton Creek was valued at $1.1 billion—surely the first time a vegan egg has hatched a unicorn.

Peter Thiel instructs start-up entrepreneurs to take inspiration from cults, advice that came to mind when Tetrick told me, after the job interview, that he screens for employees who “really believe” in his company’s “higher purpose,” because “I trust them more.” But buying into the mission has become a more complicated proposition, as Hampton Creek has recently been besieged by federal investigations, product withdrawals, and an exodus of top leadership. Silicon Valley favors entrepreneurs who position themselves as prophetic founders rather than mere executives, pursuing life-changing missions over mundane business plans. That risks rewarding story over substance, as the swift implosion of once-celebrated disrupters such as Theranos and Zenefits has shown. Fans of Hampton Creek say that Tetrick is “one of our world’s special people” who “will guide us into the abundant beyond.” Critics allege that he is leading a “cult of delusion.” Either way, he seems to be selling far more than just mayo.

Josh Tetrick at Hampton Creek’s headquarters in San Francisco, August 2017 (Christie Hemm Klok)

The story of how Tetrick founded Hampton Creek, as he has recounted it on numerous conference stages, shows his instinct for a good narrative. As he tells “folks” in his slight southern drawl, he was raised in Birmingham, Alabama, by a mother who worked as a hairdresser and a father who was often unemployed, which meant his family was “on food stamps for most of our life.” (His mother remembers it as “maybe like two weeks or three weeks.” His father could not be reached for comment.) He had dreams of playing professional football (even changing the pronunciation of his surname from Tee-trick to Teh-trick because it “felt more manly,” he told me) and was a linebacker at West Virginia University before transferring to Cornell, where he earned a Fulbright to work in Nigeria. He has said he drew inspiration for Hampton Creek from his seven years in sub-Saharan Africa (three of which he passed, for the most part, in law school at the University of Michigan). Motivated by being raised on “a steady diet of shitty food” in Birmingham and seeing homeless children relying on “dirty-ass water” in Africa, Tetrick launched Hampton Creek to “open our eyes to the problems the world faces.”

Employees can repeat parts of Tetrick’s story from memory, like an origin myth, describing for visitors the Burger King chicken sandwiches and 7-Eleven nachos that Tetrick ate as a kid. (New hires participate in a workshop where they practice reciting their own personal journey toward embracing the company’s mission.)

In his public remarks, Tetrick usually skims over the years prior to launching Hampton Creek, when he, by his own admission, was “lost.” He graduated law school in 2008, joined a firm, then parted ways with it after less than a year—in part, he told me, over an op-ed he published in the Richmond Times-Dispatch in which he critiqued factory farming. (According to Tetrick, the law firm, McGuireWoods, counted the meat processor Smithfield Foods among its clients. The law firm declined to comment.) A vegetarian since college, he had been writing fiery editorials in his spare time calling out the “disgusting abuses” of the industrial food system.

Leaving law allowed Tetrick to throw himself into motivational speaking, which had already been competing with his day job. Two or three times a week, he visited high schools, colleges, and the occasional office to preach the virtues of social entrepreneurship and describe the big money to be earned by doing good. “Selflessness is profitable!” booms Tetrick to a class of graduating seniors in a 2009 video. “Because solving the world’s greatest needs is good for you! Solving the world’s greatest needs intersects with phenomenal career opportunities for you to engage you!”

According to his speaking agency at the time, Tetrick’s credentials included his prior work in President Bill Clinton’s office (a two-month gig); for the government of Liberia (four months); for the United Nations (four months); in Citigroup’s corporate-citizenship group (four months); at McGuireWoods (nine months); and at the helm of his crowdfunding start-up, 33needs (which petered out after less than 11 months). Prior to becoming the CEO of Hampton Creek, Tetrick had held no job for more than a year.

In 2011, Tetrick was largely itinerant and drawing on savings when his childhood friend Josh Balk intervened. Balk, then working on food policy for the Humane Society of the United States, had first gotten Tetrick thinking critically about industrial agriculture back in high school. It was under Balk’s influence that Tetrick became a vegetarian and, in his 20s, set a goal of donating $1 million to the Humane Society by his 33rd birthday. Balk now urged Tetrick to throw himself into a new venture that would draw on his insights about doing well by doing good, and suggested that they launch a start-up that would use plants as a substitute for eggs.

With Balk’s help, Tetrick enlisted David Anderson, the owner of a Los Angeles bistro, whose vegan recipes for foods like cheesecake and crème brûlée helped inform Hampton Creek’s early work. To raise money, they decided to approach Khosla Ventures, which seemed inclined to invest in companies with a social or environmental bent. In a pitch to Samir Kaul, a partner at Khosla, Tetrick spoke of a “proprietary plant-based product” that was “seven years in the making” and “close to perfection.”

Despite his current emphasis on Hampton Creek’s technical chops, Tetrick says he never expressly founded Hampton Creek as a tech start-up. “I didn’t go in and meet with Samir and say, ‘Hey, Samir, just so you know, I’m a technology company,’ ” he recalled. “I went in to him and I said, ‘Food’s fucked up, man. Here’s why. Here’s an example. Here’s what we’re thinking about doing.’ ”

The pitch netted the company $500,000—its first investment.

A video on Hampton Creek’s website shows a creamy white substance being smeared on a piece of toast. Then the camera cuts to scenes of an engineer running computer models and a robot zipping pipette trays around a laboratory. By turning plants into data, a voice-over explains, the company is working to combat both chronic disease and climate change.

This utopian message took some time to evolve. As the company was getting off the ground, Tetrick’s challenge to the industrial food system had a more subversive tone. “To say that we’ve launched a global war on animals just sells the word ‘war’ so pathetically short,” he wrote in 2011 for HuffPost. In a 2013 TEDx Talk, shortly before the rollout of Just Mayo, he described the horrors of chicks being fed into “a plastic bag in which they’re suffocated” or “a macerator in which they’re ground up instantaneously.”

Tetrick’s love for animals was on display during a recent visit I made with him to a dog park—chaperoned, as I was at all times, by Hampton Creek’s head of communications. As Tetrick refueled with a four-espresso-shot Americano and a seitan bagel sandwich, we watched his golden-retriever puppy, Elie, run around on the grass. He’d purchased her from a breeder specializing in life extension in dogs, after the death of his beloved eight-year-old retriever, Jake, the previous spring. “Far and away the hardest thing that I’ve ever been through in my life was that,” Tetrick said. Elie, whom Tetrick named after the Holocaust survivor Elie Wiesel because he considers it a “cool name,” flies internationally with Tetrick on long-weekend getaways, dines on dog food made of locally sourced organic vegetables, and accompanies him to work. (Tetrick’s free-roaming pets have been a point of contention for some of Hampton Creek’s food scientists: Jake ate researchers’ cookie prototypes on at least one occasion. Back at Hampton Creek headquarters, I watched Tetrick wipe Elie’s vomit off the floor adjacent to the research kitchen.)

When I brought up the TEDx Talk, Tetrick told me he regretted it. “I was too much in my own head in thinking about what motivates me, as opposed to thinking from the perspective of everyone else who’s listening or could see that talk,” he said. “My primary motivator is alleviating animal suffering. For me. For me,” he said, in a conversation he initially wanted off the record, over concerns that it might be a “turnoff” to partners. He paused for a moment, and seemed conflicted about what he’d divulged: “I don’t know if I’ve ever said that to the full company.”

Though he said he still believes “every single word” of his past entreaties, Tetrick has largely sanitized his public remarks of references to animal abuse since finding that they fell flat with the broad group of retailers and shoppers he hopes to attract. He now hews closer to lines such as “We’ve made it really easy for good people to do the wrong things.” Though Tetrick has been a vegan for the past seven years, he discourages his marketing team from using the word vegan to describe Just products. The term, he says, evokes arrogance and wealth and suggests food that “tastes like crap.” Instead he promises customers a bright future where they can eat better, be healthy, and save the environment without spending more, sacrificing pleasure, or inconveniencing themselves. “A cookie can change the world,” Hampton Creek has asserted in its marketing materials.

The message is a rallying cry for a particular kind of revolution. Tetrick launched Hampton Creek in an era when investors were reaching beyond traditional tech companies, and businesses that might otherwise have been merely, say, specialty-food purveyors could leverage software—and grand mission statements tapping into Silicon Valley’s do-gooder ethos—to cast themselves as paradigm-breaking forces. Venture capitalists have poured money into start-ups aiming to disrupt everything from lingerie to luggage to lipstick, with less emphasis on the product than on the scope of the ambition and the promise of tech-enabled efficiencies. Hampton Creek offered idealism that could scale.

Once he’d secured funding from Khosla Ventures, Tetrick leaned into start-up culture. He ditched the couch he’d been crashing on in Los Angeles and rented a renovated garage in San Francisco. In an early press release, Hampton Creek touted Bill Gates—a limited partner in Khosla Ventures—as an investor. Tetrick recruited executives from Google, Netflix, Apple, and Amazon to join his staff, and highlighted their tech backgrounds to backers.

Employees analyzing proteins (left) and testing gelation rates for the forthcoming Just Scramble product line (Christie Hemm Klok)

He also started promoting Hampton Creek’s biotech-inspired “technology platform”: labs that could automate the extraction and analysis of plant proteins, examining their molecular features and functional performance (including gelling, foaming, and emulsifying properties) and then applying proprietary machine-learning algorithms to identify the most-promising proteins for use in muffins, spreads, and other foods. “We are seeing things that no chef, no food scientist, has ever seen before,” the company declares on its website.

Hampton Creek earned glowing press, as Tetrick proclaimed that mayo was merely the beginning of a broader food revolution. David-and-Goliath moments—like a lawsuit brought by Unilever, the producer of Hellmann’s, against Hampton Creek arguing that only spreads containing eggs should be labeled “mayo,” or revelations that members of the American Egg Board and its affiliates had joked about hiring someone to “put a hit on” Tetrick—burnished Tetrick’s disrupter status. (Unilever later dropped the lawsuit.)

Food-industry celebrities joined investors in celebrating Tetrick’s approach. He “will win a Nobel Prize one day,” raved the chef and TV host Andrew Zimmern. He is an underdog (“a tough, gritty guy,” said Kaul) and “already is changing the world,” as the celebrity chef José Andrés marveled after a visit to Hampton Creek. According to friends, family, and associates, Tetrick is an “incredible salesman,” “one of the heroes of our generation,” and possibly a future president.

Lately, the glow around Tetrick and his company has been overtaken by an unforgiving spotlight. In 2015, a Business Insider exposé based on interviews with former employees alleged, among other claims, that Hampton Creek practiced shoddy science, mislabeled its ingredients, and illicitly altered employees’ contracts to slash their severance pay. (In a Medium post, Tetrick dismissed the story as “based on false, misguided reporting.” He did admit that employment agreements had been altered, though he added that he had since “fixed” the situation.) Last year, Bloomberg asserted that Hampton Creek operatives had bought mass quantities of Just Mayo in an attempt to artificially inflate its popularity—prompting investigations by the Department of Justice and the Securities and Exchange Commission, which were eventually dropped. (Tetrick said that the buybacks were in part for quality control and accounted for less than 1 percent of sales.) Bloomberg also reported on claims by a Hampton Creek investor named Ali Partovi—an early backer of Facebook and Dropbox who lasted nine days as Tetrick’s chief strategy officer before leaving the company and severing all ties—that the company was exaggerating profit projections to deceive investors.

More recently, Target pulled Just products from its shelves after an undisclosed source raised food-safety concerns, including allegations of salmonella contamination. (Though an FDA review cleared Hampton Creek, Target—previously one of the brand’s best-performing outlets, according to Tetrick—announced that it was ending its relationship with the company.) In the span of a year, at least nine executive-level employees parted ways with Hampton Creek as rumors swirled that it was losing as much as $10 million a month. (Tetrick declined to comment on Hampton Creek’s finances but said that its turnover was typical of other high-growth companies.)

Lab coats hang in one of Hampton Creek’s R&D areas. (Christie Hemm Klok)

When I first arrived at Hampton Creek headquarters, in June, I expected to find Tetrick in crisis mode. Frankly, I was a little surprised that I’d been allowed to come: Four days before my visit, Tetrick had fired his chief technology officer, his vice president of R&D, and his vice president of business development over a purported coup attempt that seemed to suggest a lack of confidence in the CEO. (None responded to requests for comment.) By the time I arrived, the entire board save Tetrick had resigned.

Yet Tetrick was bubbling about his plans for the future. “I just got done with—and you’re welcome to see it—writing my 10-year vision,” he told me after saying goodbye to the IT-job candidate, as we joined some half a dozen newly hired Hampton Creekers for their inaugural product-tasting in the company’s research kitchen.

Amid gleaming mixers and convection ovens, the cheerful group of 20- and 30-somethings dipped crackers and crudités into ramekins of vegan salad dressing and mayonnaise arranged on a table along with spheres of cookie dough. While I could have easily polished off most of the cookie-dough samples myself, and the dressings were on par with other bottled ranch and Caesar offerings, Just Mayo—which has earned high marks from foodies—tasted to me like a slightly grassier, grainier version of Hellmann’s.

Tetrick was dissatisfied with the array of samples. “Where’s the butter? WHERE’S THE BUTTERRRRRR?” he asked the chef who’d organized the tasting. “You’ve got to get the butter!”

Hampton Creek’s plant-based butter was still a prototype, the chef reminded Tetrick. “The usual protocol for this thing is we show the products that are live on shelves, so that everybody understands what we—”

“What about the Scramble Patty?,” Tetrick interrupted. The patty, a breakfast-sandwich-ready product from their forthcoming egg-replacement line called Just Scramble, was dutifully delivered alongside the butter. Their vegetal aftertaste made clear to me why they had not yet been brought to market.

Hampton Creek has been promising the impending release of Just Scramble for years: In a presentation to potential investors cited by Bloomberg, Tetrick forecast that the mung-bean-based product line would bring in $5 million in sales in 2014—but three years later, it has yet to launch.

Hampton Creek’s plant library, where promising samples are stored for additional research (Christie Hemm Klok)

Tetrick told me that Hampton Creek will debut both a liquid version of Just Scramble and the Scramble Patty early next year, to be followed shortly by a new category of plant-based foods—possibly the butter, or ice cream. Or maybe yogurt or shortening. That’s in addition to the expansion of what Tetrick has branded Just OS (short for “operating system”), an arm of the company focused on licensing its ingredients and methods to food manufacturers. As Tetrick sees it, replacing eggs with his blend of vegan ingredients, which can be regularly tweaked and improved, makes it possible to continuously upgrade everything from cookies to condiments. “While a chicken egg will never change, our idea is that we can have a product where we push updates into the system, just like Apple updates its iOS operating system,” Tetrick has said.

Former Hampton Creek employees, including several involved in its research efforts—all of whom declined to be named for fear of retribution—suggested that the company focused on the appearance of innovation and disruption to the occasional detriment of tangible, long-term goals. They expressed frustration at being asked to reallocate resources from developing digital infrastructure to designing “cool looking” data-visualization tools that seemed like they would be primarily useful for impressing visitors; at having to leave their desks to don lab coats and “pretend to be doing something, because they had VIP investors coming through”; and at being instructed to set up taste tests for members of the public that took time away from product development. “We could’ve done really good science, and instead we were doing performances and circus acts,” one ex-employee told me.

The pursuit of Uber-size valuations has arguably resulted in some start-ups offering technological “solutions” more complicated than the problems they purport to solve. The founder of Juicero, for example, positioned himself as the Steve Jobs of juice when he launched a $699 microprocessor-enabled kitchen appliance that could press packets of chopped fruits and vegetables with enough force to “lift two Teslas”—but a Bloomberg reporter found that squeezing the packets with her hands worked just as well. (In early September, the company—which had attracted more than $100 million in venture-capital funding since its founding four years prior—announced that it was shutting down.)

To be sure, artificial intelligence is not crucial to making vegan mayonnaise: Tetrick has said his inspiration to replace eggs with Just Mayo’s Canadian-yellow-pea protein—a common ingredient in vegan packaged foods—came because he “brought in some biochemists and they ran tests, looking at the molecular weight of plant proteins, the solubility, all sorts of different properties.” Bob Goldberg, a former musician whose company, Follow Your Heart, has sold a vegan mayo called Vegenaise since 1977, told me that his inspiration to replace eggs with soy protein came in a dream. Follow Your Heart debuted its own plant-based egg substitute, VeganEgg, in 2016, after less than a year of development.

In response to ex-employees’ accounts of being derailed by visitor presentations, Tetrick said that communicating the company’s projects to potential investors and partners is essential to its work. But he rejected allegations that Hampton Creek was making fantastical promises or emphasizing image over substance, and suggested that detractors were seeking to subvert the company’s mission for their own gain. He told me that Partovi, his former chief strategy officer, who accused the company of misleading investors, was a dissatisfactory employee who had found the “chaotic” atmosphere of a start-up a “huge shock,” and had back-channel conversations about selling off the company. (Partovi declined to comment.) As for the three recently fired executives, Tetrick said their desired changes would have given more control to investors, whose incentive to go public or accept an acquisition offer might undermine Hampton Creek’s “higher purpose.” When I asked him about the board departures, which were made public after my visit, Tetrick told me that some members had been asked to step down; others “chose to remain members of the advisory board and help the company achieve its mission.”

“There’s one critical filter beyond all the other filters that’s most important,” he told me. “Will this particular decision—whatever that decision is—increase the chances that we will achieve the mission?”

Model grocery aisles at Hampton Creek display Just products. (Christie Hemm Klok)

It is difficult to resist being charmed by Tetrick. He is self-deprecating, joking that it took him six months to learn how to pronounce protein surface hydrophobicity. He exudes confidence, religiously maintains eye contact, and seems disarmingly open: He spoke with me for hours in the office long after his colleagues had gone home and repeatedly volunteered personal text messages for me to read. But his constant emphasis on where Hampton Creek is heading deflects attention from where it is now.

One afternoon during my visit, two Chinese visitors arrived at Hampton Creek for a meeting and joined Tetrick at his customary workstation at the front of the office. The pair had emailed the company’s customer-service department three days earlier, and Tetrick knew little about them besides their vague interest in “alternative proteins.” One of the men, Lewis Wang, now introduced himself as the founder of a venture-capital fund and his companion, who carried a Prada briefcase, as the chairman and CEO of one of China’s largest meat producers. The magnitude of the opportunity was not lost on Tetrick. He immediately summoned a colleague, whom he presented as “one of our lead scientists,” and instructed an employee with the nebulous title of “advocacy” to make sure the men had “the full experience.”

The visitors listened intently while Tetrick teased the company’s forthcoming patents and products, gradually building to the most cutting-edge undertaking of all: Project Jake (named after Tetrick’s deceased dog), Hampton Creek’s push into growing meat and fish in a lab. Tetrick explained how, rather than slaughtering a chicken, scientists could extract stem cells from a bird’s fallen feather and grow them into muscle cells.

Other start-ups in this field, including one co-founded by the creator of the first lab-grown burger prototype, have targeted 2020 as the earliest date for selling so-called cultured meat. Tetrick declared that his goal was to release lab-produced meat before the end of this year. “This is over our expectations,” Wang said. “It’s very exciting.”

Tetrick led the two Chinese men through a spacious room housing Hampton Creek’s team of designers and settled them in a windowless office with a large TV. Tetrick’s filmmaker, one of his longest-serving employees, cued up footage with a Kinfolk vibe: a farmer lovingly cradling a white chicken, a Hampton Creek employee in a field contemplating a single feather as wind rustled his curls. The last shot showed gloved hands snipping the base of a feather into a test tube.

“You are probably the only company that has a media studio here,” Wang remarked. “Other companies, I don’t think they have a communications studio.” But he also noted that the videos hadn’t shown how the stem cells would be transformed into meat: “Where is the growth?”

By way of response, Tetrick whisked the pair back to the design studio to behold another of his visions: a poster-size illustration of families admiring a hangar full of lab-grown hamburger patties—Tetrick’s farm of the future. Trusting in the logic that seeing is believing, he’d distributed framed versions to members of his staff and advised them to mount the drawing in their homes. “You’ve got to be able to see it,” he explained. “I want them to envision the future.”

The future of Hampton Creek that Tetrick would have the world envision is consistently, dazzlingly bright. Besides lab-grown meat and an increasing list of grocery-store staples, he promoted numerous milestones on the cusp of being realized: imminent deals with food manufacturers; patents set to receive approval; the removal of palm oil from Hampton Creek products; the launch of a long-overdue e‑commerce site; and the introduction of Power Porridge, a nutrient-rich cereal he said would be in Liberian schools this fall.

When I asked Tetrick why he was embarking on so many risky, expensive endeavors even as product deadlines slipped by, he acknowledged that a “better entrepreneur” might wait until the company was on more solid footing—but, he told me, “the difference between doing this [now versus] five years from now—or 10 years from now—is literally the difference of billions of animals suffering or not.”

Start-up CEOs frequently exaggerate their ambitions in an effort to attract more cash and justify large valuations: As Oracle’s billionaire co-founder, Larry Ellison, once quipped, “The entire history of the IT industry has been one of overpromising and underdelivering.” In the insular culture of Silicon Valley, where those who know the score often have a vested interest in keeping it hidden, it can be difficult to determine whether a company is poised for breakthrough or breakdown until the very moment of collapse.

Tetrick deposited his guests in the kitchen, where his chefs—“Michelin-star chefs,” Tetrick’s head of communications reminded me—had set a table with elegant earthenware pottery and proper silverware. “Here we have a steamed tamago, a little bit of smoked black sesame, pea tendril, and togarashi,” murmured one chef, setting down a Japanese-style omelet made with the liquid Just Scramble prototype. A vegan feast followed: Japanese chawanmushi custard with smoked kombu seaweed and sake-poached mushrooms, homemade brioche, butter and crackers, and ice cream. So did a live demonstration of the Just Scramble liquid being scrambled like eggs.

“We are very interested to invest if possible,” Wang announced after the meal. “I think Josh looks like a leader,” he told me later. Tetrick, in a rush to get to another meeting, left the two men to continue their tour of the headquarters: past researchers operating robotic arms, chefs laboring over scales, and other employees typing at laptops—a perfect vision of industry.

AIM Was Perfect, and Now It Will Die
October 6th, 2017, 10:30 AM

You kids don’t understand. You could never understand.

You walk around in habitats of text, pop-up cathedrals of social language whose cornerstone is the rectangle in your pocket. The words and the alert sounds swirl around you and you know how to read them and hear them because our culture—that we made—taught you how. We were the first generation to spend two hours typing at our closest friends instead of finishing our homework, parsing and analyzing and worrying over “u were so funny in class today” or “nah lol youre pretty cool.”

That thing you know how to do, that cerebellum-wracking attentiveness to every character of the text message and what it might mean—we invented that. But when we invented it, we didn’t have text messages, we didn’t have Snapchat, we didn’t have group chats or Instagram DMs or school-provided Gmail accounts. We had AIM. We had AOL Instant Messenger.

“How did AIM work?” you ask. It was like Gchat or iMessage, but you could only do it from a desktop computer. (Since we didn’t have smartphones back then, its desktop-delimited-ness was self-explanatory.) You could set lengthy status messages with animated icons in them. And iconic alert noises played at certain actions: the door-opening squeak when someone logged on, the door-closing click when they logged off, the boodleoop for every new message.

“Those status messages,” you say. “What were they like?” As thunderous piano-accompanied art songs were to the sad young men of Romantic Germany, so were status messages to us. They might have a succinct description of our emotional state. Often they consisted of the quotation of vitally important song lyrics: from The Postal Service, from Dashboard Confessional, from blink-182, from Green Day, from The Beatles (only after Across the Universe came out), from RENT and Spring Awakening and The Last Five Years. (We didn’t have Hamilton back then—I shudder to imagine what 2008 would’ve been like if we had.) From Brand New or Taking Back Sunday if you were pissed at your crush.

And then there were, sometimes concurrently with the song lyrics, the pained, cryptic, and egocentric recountings of the emotional trials of the day. Our parents wronged us. Our best friend wronged us. Our chemistry teacher wronged us. But we never actually said that outright; instead, we hinted at their sins and petty slights through suggestion and understatement. That’s right: AIM was so fertile and life-giving that we invented subtweeting to use it. (Gen X-ers: Don’t @ me about how you all proto-subtweeted on CompuServe or Usenet or ENIAC or whatever.)

But status messages were just the golden filigree of the gorgeous AIM tapestry. AIM was everything to us. I really mean that: As 9/11-jittered American parents were restricting access to the places where we could meet in public—the sociologist danah boyd writes about this in her book, It’s Complicated—we had to turn to AIM. So AIM became the original public-private space. AIM was the mall. AIM was the study carrel. AIM was our best friend’s finished basement. AIM was the side of the library where everyone smoked. AIM was the club (see, Hobbes, Calvin and) and da club (see Cent, Fifty). AIM was the original dark social.

We didn’t ask for someone’s number, at least not then—an errant month of texting in 2005 could still cost $45, an exorbitant figure to the teenage mind—so we asked for their AIM. Or we got their AIM from someone else. (We usually had to tread carefully around the ask.) And over a couple months, we assembled buddy lists of our friends and teammates and crushes and classmates. Their away lights twinkled in a constellation of teenage social possibility.

“What did you even talk about?” All the same stuff you text about now. We asked if they had copied down the math problem sets. We asked how far you were supposed to read tonight in Gatsby. (Then we didn’t do the reading.) We complained about how Mr. O’Brien was mean to freshmen. We talked about the high-school musical, about the ending of Donnie Darko, about God and religion. We used lol to stand in not only for laughter or humor, but for any inarticulable mass of any emotion at all. We talked about who had sex with who. We talked a lot about love. We felt the world shiver and transform when our crush logged on and—boodleoop—started messaging us.

We made our first attempts, on AIM, of transfiguring our mysterious and unpredictable thoughts into lively and personable textual performances. We were witty and dramatic. We invented our online selves—we invented ourselves.

We got bored. Myspace and Xanga helped us set up temporary and ramshackle museums of our tastes. Then Facebook came along, with all the frisson of “only college students use it,” and we drifted there. Its pseudo-maturity and time-delayed interactions allured us. Our AIM status messages went to Facebook instead: It was where we mourned the end of the field-hockey season or the final showing of the winter musical. We posted photos of each other on Facebook and liked them and commented on them—but sometimes still chatted about them on AIM. We asked homework questions via each other’s walls. We wrote subtweety openings as our Facebook status, hoping our crush would comment there instead. Eventually Facebook had its own chat product too, and it made more sense to use that, or Gchat, or to just text.

And then we graduated from high school, and some of us moved far away, and as mobile semi-adults spread across campus, AIM didn’t make logistical sense anymore. Our usernames, laden with Harry Potter and Hot Topic references, were kind of embarrassing anyway. We got bored with the sweet and secret internet of our youth, and we began the hard adult work of building our personal brands, watching prestige television, and purchasing different forms of financial insurance (renter’s, medical, dental, life).

But for years AIM was still there—simply, silently, warmly beckoning for anyone to return. You didn’t hear it. You texted instead, or made Instagram stories. We texted instead, too. It’s how we navigate our lives now.

So now, on December 15, AIM will leave us forever. “AIM is signing off for the last time,” said the product team in a tweet on Friday. “Thanks to our buddies for making chat history with us!”

AIM showed us how to live online, for good and for ill. We all live our whole lives in text chains and group threads now. We plan every hangout, we send every news article, we proclaim every relationship in the river of text it taught us to sail. Honestly, that river has been a little scary lately. Instant messaging, once a special thrill, now sets the texture of our common life. But AIM taught us how to live online first. So AIM, my old buddy, don’t feel bad if you see us shedding a tear. We know what you have to do. For we’ll see you waving from such great heights—

“Come down now,” we’ll say.

But everything looks perfect from far away.

“Come down now,” but you’ll stay.

How to Escape a Death Spiral
October 6th, 2017, 10:30 AM

“Death spiral!” President Trump tweeted in May, about the Affordable Care Act. Republicans had been leveling the accusation even earlier. Media outlets, pundits, and think tanks all weighed in on whether the label applied to Obamacare and its health-care exchanges.

Today, death spiral means “a marketplace spinning out of control,” as FiveThirtyEight’s Anna Maria Barry-Jester puts it. It’s an accusation that demands an urgent response. In a death spiral, destruction is so near and so inevitable that any attempt to avoid it becomes valid. Because the phrase evokes the dwindling seconds before a plane crash, every other option looks better by comparison.

Yet death spirals have another story to tell. Before the death spiral was a figure of speech, it was a physical problem aviators needed to solve: how to keep from crashing when they flew through clouds or fog. How they solved real death spirals in the air might help explain how to resist the narrowed choices metaphorical death spirals impose.

* * *

In the early decades of flight, aviators were bedeviled by bad weather. Those who encountered poor visibility mid-flight told harrowing tales of disorientation and confusion. Surrounded on all sides by milk-white fog or hazy darkness, pilots entered a world where nothing behaved as it should. When they observed the plane slipping into a gentle descent, they corrected to gain altitude, only to find the plane diving downward faster. Or, when they were certain the plane was flying level, the turn indicator would register a turn to the right. What the instrument registered as level, meanwhile, felt like a turn to the left.

Under these conditions, bailing out often became the best option. Those who didn’t bail out often joined their plane as it crashed into the ground.

Lost in the clouds, these pilots had fallen prey to a form of sensory disorientation known as a death spiral, or, more commonly, a graveyard spiral. The term describes an almost instinctive set of maneuvers pilots undertake when they lose sight of the horizon. The graveyard spiral begins when a plane flying in these conditions enters a gentle turn. As it turns, the plane will begin to descend, picking up speed.

Death spirals occur because the pilot feels the descent but not the turn. That has to do with the way the human body relies on both the visual and vestibular systems to perceive its orientation in space. As fluid moves through the small canals in the inner ear, the brain registers the body’s shifts in position. The fluid moves when the head turns, creating the sensation that the craft itself is turning. In mid-flight, though, the fluid can settle in place. If this happens, a turn can feel like level flight. In this situation, a pilot who follows the instruments and levels the plane’s wings feels, with absolute certainty, that the craft is turning in the opposite direction.

A pilot who recorrects to what feels level in his or her body simply reinitiates the spiral dive. Likewise, pulling back on the yoke to gain altitude without leveling the wings only tightens the plane’s downward spiral. Without a clear view of the horizon to correct against, the pilot can become so disoriented that a total loss of control results, ending in a crash. An Air Safety Institute scare-tactic training video, “178 Seconds to Live,” follows a pilot through the disorientation of a classic graveyard spiral.

Once it became clear that aviators were becoming disoriented in the clouds, they set themselves to the task of figuring out how to avoid it. This was the birth of what is known as “instrument flight.” Planes already carried basic instruments, such as turn and bank indicators, but these were primarily seen as navigational devices—implements that helped pilots reach a destination rather than keep the plane in the air. To tame the death spiral, these devices had to become part of how aviators kept control of the plane.

In 1932, William Ocker and Carl Crane published Blind Flight in Theory and Practice, a detailed guide to flying by instruments through darkness and fog. Ocker and Crane’s method relied on giving the pilot a visual reference against which to double-check the body’s fallible sensations. A turn and bank indicator shows the wings’ departure from level flight, and an artificial horizon visually represents the plane’s relation to the ground.

But designing and implementing instruments was the easy part. It was harder to teach pilots to believe what their instruments reported over (and against) the persuasive sensations they felt in their bodies. Here Ocker and Crane ran up against aviators’ long-standing belief that they controlled the plane, at least in part, through their superior “air sense”—their body’s special ability to maintain its equilibrium in flight. The idea that skillful flight depended on the body’s perception of its own weight and relative position, sometimes called “deep sensibility” or kinesthesia, was a truism among pilots. (Aviators referred to this skill as their ability to “fly by the seat of the pants,” a phrase that connoted, perhaps falsely, skill more than luck.)

Ocker and Crane started demonstrating the limits of the pilot’s body, spinning skeptical pilots in chairs until they were dizzy, or showing them the curves their bodies traced when they tried to walk a straight line without the aid of vision. They even blindfolded homing pigeons and threw them out of a plane to demonstrate that even nature’s best fliers would lose all sense of orientation without sight. (The pigeons spiraled helplessly, Ocker and Crane reported, until they finally spread their wings parachute-style and floated, unharmed, to the ground.) This “inherent spiral tendency” lived in everyone, Ocker and Crane argued, and it would show itself if not restrained by a competing vision of the horizon. Hence the aviator’s need for instruments: They gave back the horizon clouds and fog had obscured.

A wary stance toward bodily perceptions would become a guiding principle for instrument flight. Early U.S. military training documents instructed pilots that their inner ears provided information that was “not at all reliable,” for instance. Ocker and Crane gave pilots a set of practical lessons in how to reference their instruments to keep control of the plane. As pilots learned to trust those instruments, flight through clouds and fog became commonplace, safe, and mundane. The death spiral, meanwhile, was replaced by a simpler imperative: Check your instruments, and believe them.

* * *

Pilots still talk about death spirals, especially to warn amateurs of the dangers of flying into fog and haze. More commonly, though, the term claims that a social organization is on the brink of collapse: small towns, department stores, utility markets, liberal-arts colleges, Apple before Steve Jobs’s return as CEO, the island of Puerto Rico (pre- and post-Hurricane Maria), even the State Department under Rex Tillerson.

The use that is most resonant today—the death spiral as what ails insurance markets—traces back to a 1998 article by two economists describing an “adverse-selection death spiral,” in which insurance plans become financially unsustainable when too few healthy, low-cost subscribers are enrolled. Economists and businesspeople have played a leading role in the death spiral’s transition to metaphor, converting the individual danger pilots faced into a shorthand for market forces endowed with the inevitability of natural law. They draw on the death spiral’s sense of urgency, meanwhile, to heighten the stakes of corporate failures. The term demands drastic action while rationalizing whatever choices follow from that imperative.

But its metaphorical life abandons the work that made death spirals in aviation avoidable—the steady, mundane habit of cross-referencing one’s fallible perceptions to the reality of the horizon. As a metaphor, the death spiral is all problem and no solution; it preserves the original’s diagnosis but abandons its cure.

This absence seems particularly lamentable in current discussions of the ACA, given how intensely felt most people’s policy positions seem to be. The death spiral works as a metaphor in this case because it fits neatly into a larger narrative of scarcity. That young, healthy people are not buying health insurance on the exchanges seems a rational choice, given their precarious financial state. When the majority of Americans worry they will be unable to maintain their standard of living, the idea that benefits like Obamacare are about to collapse under their own weight makes intuitive sense, much as aviators’ false perceptions felt true in the air.

The death spiral’s lesson is that logic that seems intuitive needs to be calibrated against measured reality. The perception that the ACA is in a death spiral, for example, requires calibration against the realities of spending decisions and wealth distribution. America pays more than any other industrialized country for its health care, which nevertheless does less to extend its citizens’ lives. About half of the nation’s discretionary spending goes to the military. Great wealth is concentrating in the hands of a diminishing few.

Against this horizon, the urgency and narrowness implied by the ACA’s supposed death spiral look less insistent. If aviators gained more options by checking their bodily impulses against the horizon, the citizenry might likewise find more room to maneuver by widening its view of what is possible.

I don’t mean to make this process sound easy; it’s not. There’s a reason that the aviator and writer Wolfgang Langewiesche, writing in Harper’s in 1941, described instrument flight as “the castigation of the flesh translated into aeronautical terms.” Orienting their bodies to a horizon that was obscured required pilots to resist the sensations that keep humans upright at every moment. Likewise, resisting the death spiral as metaphor requires pushing back against the normal and the everyday.

Metaphorical death spirals lure people toward forced (and false) choices—choices that sanction actions driven by fear. It’s not that it feels good to believe disaster is imminent; it’s that it feels real—the intuitive perceptions of bodies and minds ground people’s thoughts and actions. Perhaps this is why the death spiral is such a powerful metaphor today, when catastrophe feels like the background to everyday life. But there’s also hope in the death spiral: Crashes aren’t inevitable—so long as there’s instrumentation to help find a horizon.


This article appears courtesy of Object Lessons.

Gossip Girl's Prophetic Relationship With Technology
October 6th, 2017, 10:30 AM

The surveillance state has a Blogspot.

At least that’s what it looks like in the opening credits of Gossip Girl, when the titular website flashes on the screen, and Kristen Bell, the narrating omniscient voice of Gossip Girl herself, intones: “Gossip Girl here: your one and only source into the scandalous lives of Manhattan’s elite.”

The site that obsessively monitors, and regularly ruins, the characters’ lives looks like it was made on the classic Blogger platform: There’s a header, a series of bordered posts that run straight down the middle, and a left rail full of links. It’s the most iconic of the many ways that the show, which turns 10 years old this year, is a perfect time capsule of the technology of its time. And while it now feels dated in some respects, it was remarkably prescient about the compulsive relationship people would end up having with their devices.

Gossip Girl was a show about ultra-privileged teens and their infinitely morphing romantic entanglements and high-society social battles. But it was also a show about lives lived in the spotlight of the internet, in the liminal era just before most of America dove headfirst into palm-sized screens.

Technology was integral to Gossip Girl’s premise and plots. Without cameraphone-wielding looky-loos invading the privacy of Serena, Blair, Dan, Chuck, and Nate, there would be no show. So many plotlines hinged on secrets, but it usually only took a couple episodes before Gossip Girl ensured those secrets were revealed, and the writers had to find something new for the characters to hide.

The show’s creators treated technology with the detailed attention befitting its central role, to the point that “we would have companies like Verizon come in and show us prototypes of new models coming up in the future,” Joshua Safran, the show’s executive producer, told Vulture. “We would come up with plotlines based on what we knew would be tech coming out in the future.” Nothing but the newest and shiniest for Manhattan’s elite.

One interesting thing about the Gossip Girl era was the sheer variety of phones available. Before we all coalesced around a touch-screen rectangle as the best possible mobile-phone design, there were BlackBerries with their full keyboards; the Motorola Razr, a super-skinny flip phone; the LG Chocolate, which came in fun colors and slid open to reveal its number pad. All of these appear in the show’s first season, a reflection of the technological diversity of the time.

A BlackBerry, two LG Chocolates, and a Motorola Razr, as featured in Gossip Girl (The CW / Netflix)

If the show were filmed today, all the Constance Billard/St. Jude’s students would have iPhones. (Serena’s would be gold and Blair’s would be rose gold. I’m certain of this.)

This was a show in which text messages were often major plot points, but this was before anyone had thought to depict texts as free-floating typography in a shot (an idea widely credited to Sherlock), which meant there were a lot of close-ups of cellphone screens.

The attention the show paid to technology was both incredible production design and a great opportunity for product placement. Watching it today, it feels extremely evocative of 2007, in a good way, but also sometimes in a hilariously dated way.

There is a plot point mid-first season that revolves around a videotape. A literal tape, from a camcorder.

Blair studies for the SATs with a handheld Princeton Review device.

And at one point Serena bonds with her boyfriend Dan’s best friend, Vanessa, over a round of Guitar Hero. Is there anything more 2007 than Guitar Hero?

Sure, one can certainly get one’s jollies by watching Blake Lively pretend to be totally crushing it playing “Free Bird” on that plastic guitar. (Her fingers barely move! As someone who devoted way too much time to getting good at Guitar Hero, I’m offended by this shallow performance.) But where these Upper East Siders were ahead of the curve was in the tightness of the grip technology had on their lives.

The teens of Gossip Girl had codependent, toxic relationships with their phones in a way that would be intimately familiar to many people now, even those who aren’t constantly living in fear of their personal lives being blogged about. Though it was possible in the late ’00s to subscribe to text-message updates from RSS feeds, or SMS alerts from news organizations, for the most part cellphones were still thought to be just for calling and texting people you knew. But Gossip Girl’s characters were using their phones to monitor the news. (By “news” I mean rumors about their very small social circle, but still.) It was unclear whether they’d signed up to get notifications from the Gossip Girl blog, or whether the anonymous blogger just had everyone’s numbers to send “e-blasts” to. These e-blasts were also inconsistent in form—sometimes they appeared as emails, and sometimes as texts.

It was not uncommon for all the characters to be in a room together, probably at a lavish penthouse party, and for all their phones to go off simultaneously. Then they’d all check them at once, creating a tableau that was strikingly similar to a modern group of people reacting to a breaking-news notification.

If I encountered this in real life today I’d be more likely to expect that North Korea had launched a missile than that my friend’s ex had been spotted with another woman.

Several of the characters—well, let’s be real, mostly Blair Waldorf—exhibited a double standard around privacy. Blair fiercely protected her own secrets, and was devastated when Gossip Girl revealed embarrassing facts about her private life. But she also frequently sent tips in to the blog about others, for her own ends. And all the characters, however they may have hated the blog, still read it regularly. This is a more extreme version of how anybody today might engage in Facebook-stalking, or other digital dirt-gathering, on people in their lives, even as they might worry about what’s discoverable about themselves online.

People have only entrusted more of their personal information to the internet—especially to their smartphones—over time. “It was once said that a person’s eyes were a window to their soul,” Blair says at one point in season one, as she’s forwarding messages from a stolen phone to herself. “That was before people had cellphones.” That certainly hasn’t become less true since then.

The role of the actual Gossip Girl blog diminished as the seasons went on, and the show’s quality declined as well. At the end, the nonsensical reveal of which main character was behind the blog entirely missed the point. That wasn’t a mystery that needed to be solved. The point of Gossip Girl wasn’t who she was; it was that she was watching.

The show was about scandal, and privilege, and the greatest love affair in 21st-century television history (Blair + Chuck 4eva), but it was also about the ways a person’s public and private life can blur in the internet age, with or without their consent. And that’s a theme that feels more relevant than ever. XOXO.

The Computer That Predicted the U.S. Would Win the Vietnam War
October 5th, 2017, 10:30 AM

At just about the halfway point of Lynn Novick and Ken Burns’s monumental documentary on the Vietnam War, an army advisor tells an anecdote that seems to sum up the relationship between the military and computers during the mid-1960s.

“There’s the old apocryphal story that in 1967, they went to the basement of the Pentagon, when the mainframe computers took up the whole basement, and they put on the old punch cards everything you could quantify. Numbers of ships, numbers of tanks, numbers of helicopters, artillery, machine gun, ammo—everything you could quantify,” says James Willbanks, the chair of military history at U.S. Army Command and General Staff College. “They put it in the hopper and said, ‘When will we win in Vietnam?’ They went away on Friday and the thing ground away all weekend. [They] came back on Monday and there was one card in the output tray. And it said, 'You won in 1965.’”

This is, first and foremost, a joke. But given the emphasis that Secretary of Defense Robert McNamara placed on data and running the numbers, I began to wonder if there was actually some software that tried to calculate precisely when the United States would win the war. And if it was possible that it once gave such an answer.

The most prominent citation for the apocryphal story comes in Harry G. Summers’ study of the war, American Strategy in Vietnam: A Critical Analysis. In this telling, however, it is not the Johnson administration doing the calculation but the incoming Nixon officials:

When the Nixon Administration took over in 1969 all the data on North Vietnam and on the United States was fed into a Pentagon computer—population, gross national product, manufacturing capability, number of tanks, ships, and aircraft, size of the armed forces, and the like. The computer was then asked, “When will we win?” It took only a moment to give the answer: “You won in 1964!”

He said “the bitter little story” circulated “during the closing days of the Vietnam War.” It made the point that there “was more to war, even limited war, than those things that could be measured, quantified, and computerized.”

There’s no doubt that Vietnam was quantified in new ways. McNamara had brought to the Pentagon what a historian called “computer-based quantitative business-analysis techniques” that “offered new and ingenious procedures for the collection, manipulation, and analysis of military data.”

In practice, this meant creating vast amounts of data, which had to be sent to computing centers and entered on punch cards. One massive program was the Hamlet Evaluation System, which sought to quantify how the American program of “pacification” was proceeding by surveying 12,000 villages in the Vietnamese countryside. “Every month, the HES produced approximately 90,000 pages of data and reports,” a RAND report found. “This means that over the course of just four of the years in which the system was fully functional, it produced more than 4.3 million pages of information.”

A computer graphic, built from Hamlet Evaluation System data, predicting that the Republic of Vietnam would control 92.7 percent of Vietnam by December 1969 (Dave Young)

Once a baseline was established, decision makers could see progress. And they wanted to see progress, which created pressure on data gatherers to paint a rosy picture of the situation on the ground. The slippage between reality and the model of reality based on data became one of the key themes of the war.

“The crucial factors were always the intentions of Hanoi, the will of the Viet Cong, the state of South Vietnamese politics, and the loyalties of the peasants. Not only were we deeply ignorant of these factors, but because they could never be reduced to charts and calculations, no serious effort was made to explore them,” wrote Richard N. Goodwin, a speechwriter for Presidents Kennedy and Johnson. “No expert on Vietnamese culture sat at the conference table. Intoxicated by charts and computers, the Pentagon and war games, we have been publicly relying on and calculating the incalculable.”

All of which the “apocryphal story” condenses into a biting joke.

But was there actually a computer somewhere in the Pentagon that was cranking out “When will we win the war?” calculations?

On October 27, 1967, The Wall Street Journal ran an un-bylined blurb from its Washington, D.C., bureau on the front page talking about a “victory index.”

U.S. strategists seek a “victory index” to measure progress in the Vietnam War. They want a single statistic reflecting enemy infiltration, casualties, recruiting, hamlets pacified, roads cleared. Top U.S. intelligence officials, in a recent secret huddle, couldn’t work out an index; they get orders to keep trying.

Now, a victory index is not quite a computer program you can ask “When will we win the war?” But it’s pretty close! A chart could be plotted. Projections could be made from current progress to future ultimate success. At the very least, we can say that officials tried to build a system that could be the kernel of truth at the center of a certainly embellished story.

And it doesn’t seem out of the question that the specific error—showing the United States had already won—could have actually occurred. As the intelligence officials tried different models to make sense of all their numbers, it certainly seems possible that some statistical runs would, in fact, return the result that the peak of the victory index had already occurred. That the war had been won.
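
To see how a composite index could produce that punch line, here is a purely speculative sketch, with invented metric names, weights, and monthly values, of how a weighted “victory index” might be computed and then found to have already peaked:

```python
# A speculative sketch of a composite "victory index." The metric names,
# weights, and monthly values are entirely invented for illustration.
WEIGHTS = {"hamlets_pacified": 0.4, "roads_cleared": 0.2,
           "enemy_casualties": 0.2, "infiltration_drop": 0.2}

# Hypothetical monthly observations, each metric normalized to a 0-1 scale.
months = ["1964-10", "1964-11", "1964-12", "1965-01", "1965-02"]
observations = [
    {"hamlets_pacified": 0.60, "roads_cleared": 0.55, "enemy_casualties": 0.50, "infiltration_drop": 0.58},
    {"hamlets_pacified": 0.66, "roads_cleared": 0.60, "enemy_casualties": 0.57, "infiltration_drop": 0.61},
    {"hamlets_pacified": 0.71, "roads_cleared": 0.64, "enemy_casualties": 0.66, "infiltration_drop": 0.65},
    {"hamlets_pacified": 0.69, "roads_cleared": 0.62, "enemy_casualties": 0.63, "infiltration_drop": 0.60},
    {"hamlets_pacified": 0.65, "roads_cleared": 0.58, "enemy_casualties": 0.59, "infiltration_drop": 0.57},
]

def victory_index(obs):
    # A single statistic: the weighted sum of all the normalized metrics.
    return sum(WEIGHTS[k] * obs[k] for k in WEIGHTS)

indexed = [(month, victory_index(obs)) for month, obs in zip(months, observations)]
peak_month, peak_value = max(indexed, key=lambda pair: pair[1])

# If "winning" means the index crossing its high point, and that high point is
# in the past, a naive report reads like the punch line of the joke.
print(f"Victory index peaked at {peak_value:.2f} in {peak_month}")
```

Any real system would have been far messier than this, but the arithmetic shows how a model fed rosy early data could report that the decisive moment had already come and gone.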

In a world besotted by data, the apocryphal story about the Pentagon computers reminds us that the model is not the world, and that ignoring that reality can have terrible consequences.

How Sputnik Launched an Era of Technological Fragility
October 4th, 2017, 10:30 AM

On October 4, 1957, a beach-ball-shaped satellite launched into space from the Kazakh desert. The satellite joined Earth’s journey around the sun, which is why its creators named it Sputnik, Russian for “traveling companion.” Sputnik circled the planet about every hour and a half, traveling at 18,000 miles per hour as it emitted a steady beep, beep, beep. On the ground, people watched Sputnik through binoculars or listened to its pings on ham radios. By January of the following year, Earth’s traveling companion fell out of its orbit and burned up in the planet’s atmosphere.

Sputnik’s spectators could not have anticipated that this event—the launch of the first human-made satellite into space—would ignite a race to the stars between the United States and the Soviet Union. Nor could they have known that they were, all of them, standing at the precipice of a new era in human history of near-complete reliance on satellite technology. For them, Sputnik was a sudden flash of innovation, something at which to marvel briefly. For their children and grandchildren and generations after, satellites would become the quiet infrastructure that powered the technology that runs their world.

“Many people grasp that satellites are important in our lives, but they may not see exactly in what ways,” said Martin Collins, a curator at the space-history department of the Smithsonian National Air and Space Museum.

So what would happen if all the satellites orbiting Earth suddenly, all at once, stopped working?

The effects would be felt unevenly around the world, Collins said. In communities that don’t rely on satellite technology, particularly in the developing world, potential disruptions to daily life likely would be less severe. In other places, like in the United States, the results would be severe at best. If the blackout persisted long enough, they’d be catastrophic.

If the satellites shut down, “tentacles of disruption,” as Collins put it, would begin to unfurl.

Without operational communications satellites, most television would disappear. People in one country would be cut off from the news reports in another. The satellite phones used by people in remote areas, like at a research station in Antarctica or on a cargo ship in the Atlantic, would be useless. Space agencies would be unable to talk to the International Space Station, leaving six people effectively stranded in space. Militaries around the world would lose contact with troops in conflict zones. Air-traffic controllers couldn’t talk to pilots flying aircraft over oceans.

Richard Hollingham described how this loss would feel in a Wellesian story for the BBC in 2013: “The rapid-communications systems that tied the world together were unraveling. Rather than shrinking, it seemed as if the Earth was getting larger.”

Without global navigation satellites, the Global Positioning System (GPS)—the network of satellites and ground stations that tell us exactly where we are—would crumble. Some of the immediate effects would be frustrating, but not debilitating, like not being able to use a smartphone to find your way around a new city or track a run in a fitness app. Other effects would have far-reaching consequences. Millions of truckers and other workers in the delivery industry rely on GPS to crisscross the country each day, delivering food, medicine, and other important goods.

The loss of GPS also would have disastrous results for our sense of time. GPS satellites are equipped with atomic clocks, which provide the very precise time signals that GPS-enabled devices use to calculate their distance from each satellite, and therefore their own location. Satellites transmit this time to receivers on the ground, where power companies, banks, computer networks, and other institutions synchronize their operations to it. Without these clocks, the electrical grid, financial transactions, and, yes, the internet would start to fall apart. So too would the internet of things, the vast web of devices that talk to each other on our behalf.
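
For a concrete sense of how timing becomes position, here is a minimal sketch, with made-up satellite positions and signal delays, of the basic idea: a receiver converts each signal’s travel time into a distance, then solves for the point that best fits all of those distances at once. (Real receivers also solve for their own clock error, which is omitted here.)

```python
# Illustrative only, not real GPS math: satellite positions and delays are invented.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, meters per second

# Hypothetical satellite positions (meters, Earth-centered coordinates)
# and measured signal travel times (seconds).
sat_positions = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
travel_times = np.array([0.0672, 0.0702, 0.0690, 0.0723])

measured_distances = travel_times * C  # each delay becomes a distance

def residuals(guess):
    # Difference between the distances implied by a guessed position
    # and the distances actually measured.
    return np.linalg.norm(sat_positions - guess, axis=1) - measured_distances

solution = least_squares(residuals, x0=np.zeros(3))
print("Estimated receiver position (m):", solution.x)
```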

“GPS is staggeringly integrated into our lives,” Collins said.

The shutdown of weather and remote-sensing satellites would gravely hamper our ability to predict weather events, like the major hurricanes that have swept across the Caribbean and southeastern United States this year. Farmers couldn’t get information that informs their crop and water management, and scientists wouldn’t have data for their studies of Earth’s features or climate change.

The disruption of every one of the hundreds of operational satellites orbiting Earth is unlikely, but even the loss of one or a few satellites could have powerful effects. When one communications satellite failed in orbit in 1998, 80 percent of pager users in the United States (about 45 million people) lost service. An article in The Los Angeles Times a couple of days later sought to emphasize the fragility of the nation’s behind-the-scenes satellite infrastructure. “Paging is hardly the only consumer convenience delivered via satellite technology,” it warned.

Satellite operations could get knocked out by natural phenomena, like powerful solar storms, or by human activity, like one nation’s intentional destruction of another’s fleet of satellites, or an all-out global war. Space junk could also set off a series of collisions that damage any satellites in their path. Collins said that the cause of a complete blackout of satellites would likely determine how people respond to it. Chaos, for a time, is likely inevitable, and apocalyptic science fiction offers plenty of visions of how this doomsday scenario might play out.

“Would it severely disrupt the way we live right now? Yes,” Collins said. “Would people be starving in the streets or would there be civil disobedience? That’s hard to say. Potentially.”

Would anything good come out of it? Perhaps, Collins said, when the power grid fails and people are left in the darkness, they could see, many of them for the first time, the unobstructed night sky, with the stars of the Milky Way stretching out before them. They could look up and gaze at the place where their traveling companions, now silent, float along with them.

The Social Experiment Facebook Should Run
October 4th, 2017, 10:30 AM

Facebook’s greatest strength—its ability to identify and connect like-minded people—is also a major vulnerability. Over the past month, the company has revealed that Russia-linked accounts purchased thousands of fake political ads on its platform around the 2016 U.S. election. These ads “microtargeted” Americans, exploiting their divisions along political, racial, and religious lines. Some, as CNN recently reported, specifically targeted voters in Michigan and Wisconsin, two of the most heavily contested states.

The apparent goal was to sow distrust among voters, perhaps even shape how they voted.

As an initial response, Facebook announced that it will close the loopholes that allow Russian-backed sources—or any other foreign powers—to open fake accounts. While a productive start, this doesn’t go after the underlying problem that Russian operatives capitalized on: the extreme polarization of Americans on political issues. Wittingly or not, Facebook has taken on a central role in American democracy. Now the company has to decide how proactive it wants to be to become “a force for good,” as Mark Zuckerberg has promised.

One step Facebook could take in this direction: reverse-engineer the very algorithms the Russians exploited. Facebook could try an experiment in matching Americans across political lines to help bridge the country’s deep divide.

Key to understanding why the Russian operatives’ efforts worked is looking at the way in which people build social networks online and the value they get from them. In Bowling Alone, the Harvard professor Robert Putnam uses the phrase “social capital” to describe this process, which he explains happens in two ways: “Bonding” is social capital built by connecting within exclusive homogenous groups; “bridging” is social capital built by connecting with inclusive heterogeneous groups. Both are valuable—while bonding offers support and solidarity, bridging helps people expand their perspectives and creates trust across diverse groups.

“Bonding social capital constitutes a kind of sociological superglue,” Putnam writes, “whereas bridging social capital provides a sociological WD-40.”

Facebook is primarily a mechanism for bonding, not bridging. Studies show that in the vast majority of cases, people live in self-made echo chambers on Facebook that reinforce their existing views of the world. You need look no further than the “red feeds” and “blue feeds” on any given issue to see that in general, when people connect on Facebook, they are mostly connecting with others who have similar political beliefs, educational backgrounds, and religious outlooks.

Although bridging is possible—say, when your old high-school friend who stayed local while you flew across the country for college offers to connect with you—the ability to choose your network and “hide,” “unfriend,” or even “block” people with whom you no longer want to engage makes it essentially an exclusive network. Facebook further amplifies this segregation by using data from a user’s social network and activities on the platform to custom-tailor a News Feed that aggregates posts it knows that user wants to see, often reinforcing worldviews. This insularity allowed Russia’s $100,000 investment in “dark ads” to reach roughly 10 million Americans before and after the election in discrete demographic and geographic circles.

Facebook’s emphasis on bonding over bridging also has consequences for how people build trust. The relationship researcher John Gottman has found that successful romantic relationships depend on making frequent deposits in each partner’s “emotional bank account.” Consistent positive interactions increase levels of trust in the relationship, so that when conflict arises, there are enough “reserves” in place to make a withdrawal and still leave the relationship in a net-positive place. In fact, Gottman estimates that a relationship needs at least five positive interactions for every negative one to maintain its equilibrium.

Applying Gottman’s “bank account” model to social relationships can help explain why it’s difficult to have meaningful disagreements on political issues. Americans today spend an average of six and a half hours each day online, with almost a third of that time on social media. If their social-media diets include relatively insular circles like Facebook, their daily positive interactions are likely occurring more with people they already agree with, and less with people from groups with different perspectives. In 2017, Americans are most likely to interact with someone who holds different political views while screaming at them from the other side of a protest line, or inside an angry internet forum.

Without a way to make regular, positive deposits in social relationships that bridge political lines, every civic debate is a withdrawal without social reserves, leaving people perpetually overdrawn.

Some research supports the idea that frequent and meaningful interactions between diverse Facebook users can promote the flow of new ideas across otherwise unconnected groups. Jonny Thaw, a spokesperson for the company, pointed out a 2014 study that looked at how the platform creates the “bridging social capital” described by Putnam. The study, which was conducted by researchers unaffiliated with Facebook, found that “weaker ties” in someone’s network (like a friend of a friend, or someone with whom you would not have other offline connections) offered users the most potential to expand their worldview, because these connections opened the door to new information and diverse perspectives.

More important, the users who benefited the most from their weak social ties—in terms of expanding their outlook—were those who actively engaged in what the study’s authors call “Facebook Relationship Maintenance Behaviors,” like “responding to questions, congratulating or sympathizing with others, and noting the passing of a meaningful day.”

In other words, simply being connected to Facebook users from different backgrounds isn’t enough to make people open to new perspectives and ideas; users need to actively make deposits in each other’s social bank accounts in order to truly benefit from those diverse connections. The study notes that the key to facilitating bridging among users “may lie in technical features of the site that lower the cost of maintaining and communicating with a larger network of weak ties.”

This study points to some creative ways that Facebook could promote political bridging among its users—and develop some WD-40 against threats to democracy in the process. Let’s say that Facebook created a new feature called “Friend Swap” for users interested in creating connections with people outside of their political bubble. The company could use its powerful algorithms to match users with people they disagree with politically but have something in common with personally, based on their individual preferences and posts. What’s important is that the users don’t engage over political issues, at least until they’ve had time to build some social trust. If you’re a liberal, you might not be so open to being thrown whole-hog into a conservative stranger’s feed and reading their posts from Fox News. But you may find some common ground around, say, rooting for the same sports team, or shared musical tastes or experiences, like being a veteran.

A feature like Friend Swap would selectively share only the posts from each user’s feed that touch on the interest they have in common with their political counterpart, and allow them to interact on that topic. After a trial period, the “swapped” posts might include ones on another common interest, and so on, until the users choose, if they ever do, to actually become “friends.” By creating connections around common interests or experiences, users would make deposits in each other’s social bank accounts over time. If they do become full-on friends, they would be more likely, at least in theory, to be open to a dialogue on differing political viewpoints with someone they’ve come to trust based on bonding in other areas. Hopefully, at the very least, they could agree to disagree while maintaining their connection, which is still a win in today’s climate.
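
Friend Swap is only a thought experiment, but the matching logic it describes is simple enough to sketch. The toy code below, with invented user records, scoring, and thresholds, pairs users who sit far apart politically yet share a non-political interest, then builds a feed limited to that shared interest:

```python
# A toy sketch of the kind of matching "Friend Swap" imagines. Every field,
# weight, and threshold here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    political_lean: float                        # -1.0 (left) to +1.0 (right), inferred elsewhere
    interests: set = field(default_factory=set)
    posts: list = field(default_factory=list)    # (interest_tag, text) pairs

def swap_score(a: User, b: User) -> float:
    """Higher when two users disagree politically but overlap personally."""
    disagreement = abs(a.political_lean - b.political_lean)   # ranges 0..2
    common = a.interests & b.interests
    if not common:
        return 0.0                                            # nothing to bond over
    return disagreement * len(common)

def shared_feed(a: User, b: User) -> list:
    """Only posts about interests the pair has in common, from either side."""
    common = a.interests & b.interests
    return [(u.name, tag, text)
            for u in (a, b)
            for tag, text in u.posts
            if tag in common]

alice = User("alice", -0.8, {"cubs", "guitar"}, [("cubs", "What a ninth inning!")])
bob   = User("bob",   +0.7, {"cubs", "fishing"}, [("cubs", "Season tickets renewed.")])

if swap_score(alice, bob) > 1.0:              # hypothetical pairing threshold
    for author, tag, text in shared_feed(alice, bob):
        print(f"[{tag}] {author}: {text}")
```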

Of course, Friend Swap won’t be a panacea for political differences. It requires people to view online relationships—with strangers—as being valuable enough to invest their time. And the self-selection of people who opt in to this kind of feature might be the same people who would be more open to different viewpoints anyway. But even as an experiment, Friend Swap would be an opportunity for Facebook to gather data on how it can bridge its red and blue silos over shared values like civility, openness, tolerance, and respect. It would also offer a new way to connect people from politically polarized geographic regions, like the Rust Belt and the coasts.

Trying to “socially engineer” relationships, even for the purposes of political cross-pollination, might go against the grain of a company that has been built upon a principle of fierce neutrality. But Russian operatives’ attempts to use Facebook to disrupt American democracy demonstrate that neutrality no longer seems to be an option, if it ever really was one in the first place. On Yom Kippur, the Jewish day of atonement, Zuckerberg acknowledged as much, asking for forgiveness for “the ways [Facebook] was used to divide people rather than bring us together.” Facebook has the talent and the resources to help unite people in defense of democratic values, if it has the will to do it.

What Would Flying From New York to Shanghai in 39 Minutes Feel Like?
October 3rd, 2017, 10:30 AM

It’s just after sunrise in New York City. The sky is bathed in pinks and oranges as people walk along a long dock toward a white ship. They board the vessel and it sails to a launchpad farther out in the water, where a spaceship strapped to a giant rocket awaits. After they pile in, the rocket blasts off into the atmosphere. About 39 minutes later, they land halfway around the world, in Shanghai.

This is the scenario imagined by SpaceX founder Elon Musk, who discussed the futurist transport system in a speech in Australia last week about the company’s long-term ambitions. The not-yet-built system—which Musk nicknamed BFR, for “big fucking rocket”—would, someday, ferry passengers from one major city to another. Long-distance trips from Bangkok to Dubai, or from Honolulu to Tokyo, for example, would take about 30 minutes and cost about as much as an economy airline ticket.

(The BFR would also subsume the duties of SpaceX’s current fleet of rockets and spacecraft, like the Falcon 9 and Dragon capsule, by launching satellites into orbit, transporting astronauts and cargo to the International Space Station, and even bringing humans to the moon and Mars.)

The news of the Earth transport system was thrilling for Musk’s fans, for whom a speech from the entrepreneur about space exploration is akin to an Apple launch event. The future is really here, or at least quickly approaching! Imagine setting your smartphone to rocket mode instead of airplane mode. Rocket travel, an animated video of this future seemed to suggest, would be a breeze.

Well, not necessarily. To make a half-hour trip, the BFR would have to travel thousands of miles per hour, with a maximum speed of about 16,700 miles per hour, according to SpaceX. The flight would expose passengers to sensations they don’t usually encounter while traveling, like intense gravitational forces and weightlessness. The spaceship would definitely need to stock barf bags.

Musk explained in a tweet that travelers would experience g-forces between 2 and 3, meaning they would feel two to three times their normal body weight. “Will feel like a mild to moderate amusement park ride on ascent and then smooth, peaceful, and silent in zero gravity for most of the trip until landing,” he said.

“That may not be a very comfortable way to travel,” said Ge-Cheng Zha, a mechanical and aerospace engineering professor at the University of Miami who studies supersonic flight. “Not everyone can take it.”

The ride will be most intense during landing and takeoff. The rapid acceleration and deceleration could lead to motion sickness. So could a quick peek out the window during a particularly twisty maneuver. “There’s a disconnect between the g-force and what the person sees, which can lead to severe motion sickness,” said Andy Feinberg, a geneticist at Johns Hopkins University in Maryland who studies astronaut health (and applied to be an astronaut himself in 1979).

Feinberg has flown aboard NASA’s now-retired zero-gravity plane, which simulates weightlessness by taking a series of dives. The speed of the aircraft matches the speed of the passengers as they fall, which creates the experience of free-falling. Feinberg remembers that not everyone on board could handle the shifts. “The NASA people, the astronauts, and I were having the time of our lives,” he said. Everyone else around them was throwing up.

Aside from the flight experience and the discomfort it may bring, there’s a host of other factors Musk and his engineers will need to consider before the BFR becomes reality. While the actual trip may indeed take about a half-hour, preparing and unloading the passengers could take hours. The nature of the travel could increase the time required for security checks, luggage checks, and whatever new safety procedures flight attendants may have to present. (At least some flight attendants, for what it’s worth, seem game for the BFR. While it’s too soon to give a “definitive opinion” on rocket travel, Sara Nelson, the president of the Association of Flight Attendants-CWA International, said flight attendants have the “flexibility to adapt to new conditions.”)

There’s also the question of fuel. Launching the BFR, which will stand 106 meters tall, nearly double the height of the Falcon 9, will require tremendous energy, Zha said. A rocket is much harder to get off the ground than, say, a supersonic plane, he said. The Concorde, a now-retired commercial supersonic airliner that carried passengers from New York to London in under three hours, among other destinations, used about three times as much fuel as a Boeing 747. “The reason we use rockets for space delivery is because there’s no other options,” Zha said. “On Earth, airplanes are way more efficient.”

The transport system will also face questions from the U.S. government and other nations in the BFR’s flight path about the rocket’s safety risks and environmental impact. The rocket’s introduction would require the regulation of an entirely new commercial industry.

Still, many won’t be deterred. The BFR doesn’t exist yet, so the coolness factor outweighs all others. Asked whether he would hitch a ride on the BFR, Feinberg said, “in a second.”

When Working From Home Doesn’t Work
October 3rd, 2017, 10:30 AM

In 1979, IBM was putting its stamp on the American landscape. For 20 years, it had been hiring the greats of modernism to erect buildings where scientists and salespeople could work shoulder-to-shoulder commanding the burgeoning computer industry. But that year, one of its new facilities—the Santa Teresa Laboratory, in Silicon Valley—tried an experiment. To ease a logjam at the office mainframe, it installed boxy, green-screened terminals in the homes of five employees, allowing them to work from home.

The idea of telecommuting was still a novelty. But this little solution seemed effective. By 1983, about 2,000 IBMers were working remotely. The corporation eventually realized that it could save millions by selling its signature buildings and institutionalizing distance work; the number of remote workers ballooned. In 2009, an IBM report boasted that “40 percent of IBM’s some 386,000 employees in 173 countries have no office at all.” More than 58 million square feet of office space had been unloaded, at a gain of nearly $2 billion. IBM, moreover, wanted to help other corporations reap the same officeless efficiencies through its consulting services. Leading by example was good marketing.

Then, in March of this year, came a startling announcement: IBM wanted thousands of its workers back in actual, physical offices again.

The reaction was generally unsparing. The announcement was depicted, variously, as the desperate move of a company whose revenues had fallen 20 quarters in a row; a veiled method of shedding workers; or an attempt to imitate companies, like Apple and Google, that never embraced remote work in the first place. “If what they’re looking to do is reduce productivity, lose talent, and increase cost, maybe they’re on to something,” says Kate Lister, the president of Global Workplace Analytics, which measures (and champions) working from home.

IBM might have seen this coming. A similarly censorious reaction greeted Yahoo when it reversed its work-from-home policy in 2013. Aetna and Best Buy have taken heat for like-minded moves since. That IBM called back its employees anyway is telling, especially given its history as “a business whose business was how other businesses do business.” Perhaps Big Blue’s decision will prove to be a mere stumble in the long, inevitable march toward remote work for all. But there’s reason to regard the move as a signal, however faint, that telecommuting has reached its high-water mark, and that more is lost in working apart than was first apparent.

How could this be? According to Gallup, 43 percent of U.S. employees work remotely all or some of the time. As I look to my left, and then to my right, I see two other business-casual-clad men hammering away on their laptops beside me at a Starbucks just outside Chicago. They look productive. Studies back this impression up. Letting Chinese call-center employees work from home boosted their productivity by 13 percent, a Stanford study reported. And, again according to Gallup, remote workers log significantly longer hours than their office-bound counterparts.

Another batch of studies, however, shows the exact opposite: that proximity boosts productivity. (Don’t send call-center workers home, one such study argues—encourage them to spend more time together in the break room, where they can swap tricks of the trade.) Trying to determine which set of studies to trust is—trust me—a futile exercise. The data tend to talk past each other. But the research starts to make a little more sense if you ask what type of productivity we are talking about.

If it’s personal productivity—how many sales you close or customer complaints you handle—then the research, on balance, suggests that it’s probably better to let people work where and when they want. For jobs that mainly require interactions with clients (consultant, insurance salesman) or don’t require much interaction at all (columnist), the office has little to offer besides interruption.

But other types of work hinge on what might be called “collaborative efficiency”—the speed at which a group successfully solves a problem. And distance seems to drag collaborative efficiency down. Why? The short answer is that collaboration requires communication. And the communications technology offering the fastest, cheapest, and highest-bandwidth connection is—for the moment, anyway—still the office.

Consider the extremely tiny office that is the cockpit of a Boeing 727. Three crew members are stuffed in there, wrapped in instrument panels. Comfort-wise, it’s not a great setup. But the forced proximity benefits crew communication, as researchers from UC San Diego and UC Irvine demonstrated in an analysis of one simulated flight—specifically the moments after one crew member diagnoses a fuel leak.

A transcript of the cockpit audio doesn’t reveal much communication at all. The flight engineer reports a “funny situation.” The pilot says “Hmmm.” The co-pilot says “Ohhhh.”

Match the audio with a video of the cockpit exchange and it’s clear that the pilots don’t need to say much to reach a shared understanding of the problem. That it’s a critical situation is underscored by body language: The flight engineer turns his body to face the others. That the fuel is very low is conveyed by jabbing his index finger at the fuel gauge. And a narrative of the steps he has already taken—no, the needle on the gauge isn’t stuck, and yes, he has already diverted fuel from engine one, to no avail—is enacted through a quick series of gestures at the instrument panel and punctuated by a few short utterances.

It is a model of collaborative efficiency, taking just 24 seconds. In the email world, the same exchange could easily involve several dozen messages—which, given the rapidly emptying fuel tank, is not ideal.

This brings us to a point about electronic communications technologies. Notionally, they are cheap and instantaneous, but in terms of person-hours spent using them, they are actually expensive and slow. Email, where everything must literally be spelled out, is probably the worst. The telephone is better. Videoconferencing, which gives you not just inflection but expression, is better still. More-recent tools like the workplace-communication app Slack integrate social cues into written exchanges, leveraging the immediacy of instant-messaging and the informality of emoji, plus the ability to create a channel to bond over last night’s #gameofthrones.

Yet all of these technologies have a weakness, which is that we have to choose to use them. And this is where human nature throws a wrench into things. Back in 1977, the MIT professor Thomas J. Allen looked at communication patterns among scientists and engineers and found that the farther apart their desks were, the less likely they were to communicate. At the 30-meter mark, the likelihood of regular communication approached zero.

The expectation was that information technology would flatten the so-called Allen Curve. But Ben Waber, a visiting scientist at MIT, recently found that it hasn’t. The communications tools that were supposed to erase distance, it turns out, are used largely among people who see one another face-to-face. In one study of software developers, Waber, working alongside researchers from IBM, found that workers in the same office traded an average of 38 communications about each potential trouble spot they confronted, versus roughly eight communications between workers in different locations.

The power of presence has no simple explanation. It might be a manifestation of the “mere-exposure effect”: We tend to gravitate toward what’s familiar; we like people whose faces we see, even just in passing. Or maybe it’s the specific geometry of such encounters. The cost of getting someone’s attention at the coffee machine is low—you know they’re available, because they’re getting coffee—and if, mid-conversation, you see that the other person has no idea what you’re talking about, you automatically adjust.

Whatever the mechanisms at play, they were successfully distilled into what Judith Olson, a distance-work expert at UC Irvine, calls “radical collocation.” In the late 1990s, Ford Motor let Olson put six teams of six to eight employees into experimental war rooms arranged to maximize team members’ peripheral awareness of what the others were up to. The results were striking: The teams completed their software-development projects in about a third of the time it usually took Ford engineers to complete similar projects. That extreme model is hard to replicate, Olson cautions. It requires everyone to be working on a single project at the same time, which organizational life rarely allows.

But IBM has clearly absorbed some of these lessons in planning its new workspaces, which many of its approximately 5,000 no-longer-remote workers will inhabit. “It used to be we’d create a shared understanding by sending documents back and forth. It takes forever. They could be hundreds of pages long,” says Rob Purdie, who trains fellow IBMers in Agile, an approach to software development that the company has adopted and is applying to other business functions, like marketing. “Now we ask: ‘How do we use our physical space to get on and stay on the same page?’ ”

The answer, of course, depends on the nature of the project at hand. But it usually involves a central table, a team of no more than nine people, an outer rim of whiteboards, and an insistence on lightweight forms of communication. If something must be written down, a Post‑it Note is ideal. It can be stuck on a whiteboard and arranged to form a “BVC”—big, visual chart—that lets everyone see the team’s present situation, much like the 727’s instrument panels. Communication is both minimized and maximized.

Talking with Purdie, I began to wonder whether the company was calling its employees back to an old way of working or to a new one—one that didn’t exist in 1979, when business moved at a more stately pace. In those days, IBM could decide what to build, plan how to build it, and count on its customers to accept what it finally built at the end of a months-long process. Today, in the age of the never-ending software update, business is more like a series of emergencies that need to be approached like an airplane’s fuel leak. You diagnose a problem, deliver a quick-and-dirty solution, get feedback, course-correct, and repeat, always with an eye on the changing weather outside.

I asked Purdie whether IBM’s new approach could be accomplished at a distance, using all the new collaborative technology out there. “Yes,” he said. “Yes, it can. But the research says those teams won’t be as productive. You won’t fly.”

Google and Facebook Failed Us
October 2nd, 2017, 10:30 AM

In the crucial early hours after the Las Vegas mass shooting, it happened again: Hoaxes, completely unverified rumors, failed witch hunts, and blatant falsehoods spread across the internet.

But they did not do so by themselves: They used the infrastructure that Google and Facebook and YouTube have built to achieve wide distribution. These companies are the most powerful information gatekeepers that the world has ever known, and yet they refuse to take responsibility for their active role in damaging the quality of information reaching the public.

BuzzFeed’s Ryan Broderick found that Google’s “top stories” results surfaced 4chan forum posts about a man that right-wing amateur sleuths had incorrectly identified as the Las Vegas shooter.

4chan is a known source not just of racism, but of hoaxes and deliberate misinformation. In any list a human might make of sites to exclude from being labeled as “news,” 4chan would be near the very top.

Yet there Google was, surfacing 4chan as people desperately searched for information about this wrongly accused man, adding fuel to the fire, amplifying the rumor. That is playing an active role in the spread of bad information, poisoning the news ecosystem.

The problem can be traced back to a change Google made in October 2014 to include non-journalistic sites in the “In the News” box instead of pulling from Google News.

Even so, one might have imagined that not every forum site would make the cut. The idea that 4chan falls within the universe of sources Google might scrape is horrifying.

Worse, when I asked Google about this and explained why I thought it was a severe problem, the company sent back boilerplate:

Unfortunately, early this morning we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.

It’s no longer good enough to note that something was algorithmically surfaced and then replaced. It’s no longer good enough to shrug off (“briefly,” “for a small number of queries”) the problems in the system simply because it has computers in the decision loop.

After I followed up with Google, they sent a more detailed response, which I cannot quote directly but can describe. It was primarily an attempt to minimize the mistake, even while acknowledging that one had been made.

The 4chan results, Google said, had not shown up for general searches about Las Vegas, only for searches of the misidentified shooter’s name. The 4chan forum post surfaced because it was “fresh” and because relatively few people had ever searched for the falsely accused man. Basically, the algorithms controlling what to show didn’t have much to go on, and when something new popped up just as searches for the name were ramping up, the system happily slotted it in as the first result.

The note further explained that what shows up in “In the News” derives from the “authoritativeness” of a site as well as the “freshness” of the content on it. And Google acknowledged it had made a mistake in this case.
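To see how that combination can misfire, consider a deliberately toy sketch. This is not Google’s code; the scoring function, the weights, and the decay rate below are invented for illustration. It simply shows how a ranking that blends an “authoritativeness” signal with a “freshness” bonus can let a brand-new forum post beat an established outlet on a query with almost no search history.

```python
import math

def in_the_news_score(authoritativeness, age_seconds, half_life=3600.0):
    """Hypothetical blend of site authority and content freshness.

    Neither the weights nor the decay rate are Google's; they exist only
    to show how a brand-new post can win on a sparsely searched query.
    """
    freshness = math.exp(-age_seconds / half_life)  # newer content scores higher
    return 0.4 * authoritativeness + 0.6 * freshness

# Right after a breaking event, a rarely searched name has almost no history:
# the only "fresh" document is a post from a low-authority forum.
candidates = [
    {"site": "established-newspaper.example", "authority": 0.9, "age_seconds": 6 * 3600},
    {"site": "anonymous-forum.example",       "authority": 0.1, "age_seconds": 10 * 60},
]

ranked = sorted(
    candidates,
    key=lambda c: in_the_news_score(c["authority"], c["age_seconds"]),
    reverse=True,
)
print([c["site"] for c in ranked])
# ['anonymous-forum.example', 'established-newspaper.example']
```

Run on these made-up numbers, the fresh, low-authority post comes out on top, which is roughly the failure mode Google described.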

The thing is: This is a predictable problem. In fact, there is already a precedent. After the Boston Marathon bombing, we saw a very similar “misinformation disaster,” when amateur sleuths wrongly identified innocent people as suspects.

Gabe Rivera, who runs a tech-news service called Techmeme that uses humans and algorithms to identify important stories, addressed the problem in a tweet. Google, he said, couldn’t be asked to hand-sift all content but “they do have the resources to moderate the head,” i.e., the most important searches.

The truth is that machines need many examples to learn from. That’s something we know from all the current artificial-intelligence research. They’re not good at “one-shot” learning. But humans are very good at dealing with new and unexpected situations. Why are there not more humans inside Google who are tasked with basic information filtering? How can this not be part of the system, given that we know the machines will struggle with rare, breaking-news situations?
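What might “moderating the head” look like in practice? Here is one hypothetical sketch; the names (QueryRouter, head_threshold, review_queue) and the threshold are mine, not any platform’s. The idea is simply that high-volume queries during a breaking-news event get held for a human check before algorithmic results go out, while the long tail stays automated.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QueryRouter:
    """Hypothetical router: the long tail stays algorithmic, but
    high-volume ("head") queries during breaking news wait for a person."""
    head_threshold: int = 10_000               # searches per hour that make a query "head"
    review_queue: List[str] = field(default_factory=list)

    def route(self, query: str, searches_per_hour: int, breaking_news: bool) -> str:
        if breaking_news and searches_per_hour >= self.head_threshold:
            self.review_queue.append(query)     # a human vets sources before results surface
            return "hold for human review"
        return "serve algorithmic results"

router = QueryRouter()
print(router.route("las vegas shooting", searches_per_hour=250_000, breaking_news=True))
print(router.route("local library hours", searches_per_hour=12, breaking_news=True))
```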

Google is too important to get this wrong, and from what I’ve seen reporting on the company for 10 years, it does care about information quality. Even from a pure corporate-trust and brand perspective, wouldn’t it be worth having a team large enough to make sure it gets these situations right across the globe?

Of course, it is not just Google.

On Facebook, a simple search for “Las Vegas” yields a group called “Las Vegas Shooting /Massacre,” which sprang up after the shooting and already has more than 5,000 members.

The group is run by Jonathan Lee Riches, who gained notoriety by filing 3,000 frivolous lawsuits while serving a 10-year prison sentence, after being convicted of stealing money by impersonating people whose bank credentials had been phished. He now calls himself an “investigative journalist” with Infowars, though there is no indication he has ever been published on the site. And given that he also lists himself as a former male underwear model at Victoria’s Secret, a former nuclear scientist at Chernobyl, and a former bodyguard at Buckingham Palace, his work history may not be reliable.

The problem with surfacing this man’s group to Facebook users would be obvious to literally any human. But to Facebook’s algorithms, it’s just a fast-growing group with an engaged community.

Most people who joined the group looking for information presumably don’t know that the founder is notorious for legal and informational hijinks.

Meanwhile, Kevin Roose of The New York Times pointed out that Facebook’s Trending Stories page was surfacing stories about the shooting from Sputnik, a known source of Russian propaganda. Facebook’s statement, like Google’s, was designed to minimize what had happened.

“Our Global Security Operations Center spotted these posts this morning and we have removed them. However, their removal was delayed, allowing them to be screen-captured and circulated online,” a spokesperson responded. “We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”

All across the information landscape, looking for news about the shooting within the dominant platforms delivered horrifying results. “Managing breaking news is an extremely difficult problem but it's incredible that asking the search box of *every major platform* returns raw toxic sewage,” wrote John Herrman, who covers the platforms for The New York Times.

For example, he noted that YouTube, Google’s corporate sibling within Alphabet, was also surfacing absolutely wild content and nothing from respected news organizations.

As news consumers, we can say this: It does not have to be like this. Imagine a newspaper posting unverified rumors about a shooter from a bunch of readers who had been known to perpetuate hoaxes. There would be hell to pay—and for good reason. The standards of journalism are a set of tools for helping to make sense of chaotic situations, in which bad and good information about an event coexist. These technology companies need to borrow our tools—and hire the people to execute on the principles—or stop saying that they care about the quality of information that they deliver to people.

There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.