Science Journalism and the Art of Expressing Uncertainty
December 31st, 2013, 05:59 AM
It is all too easy for unsupported claims to get published in scientific publications. How can journalists address this? Note: This piece was originally published on August 4, 2013. Journalism is filled with examples of erroneous reporting turning into received opinion when reporters, editors, and the public take a story at face value because it comes from a generally trusted source. Consider, for example, the claims about Iraq’s weapons of mass destruction, or the various public and corporate scandals where authorities ranging from government officials to the chairman of General Electric were taken at their word. As a scientist, I am concerned about the publication and promotion of speculative research, but I also believe that journalists can address this problem. Indeed, the traditional journalistic tool of interviewing knowledgeable outsiders can help if the focus is on the aspects of uncertainty associated with any scientific claim. Modern science is, by and large, a set of research directions rather than a collection of nuggets of established truths. In science reporting, the trusted sources are respected journals that actually are not infallible and often publish thought-provoking but speculative claims as settled truth. The story continues from there: The journal or the authors themselves promote the work in the news media, and established outlets report the claims without question. The journalists involved are implicitly following an assumption: If an article is published in a well-regarded publication, treat it as true. In fact, this is a dangerous supposition. Just to cite a few recent examples, news media have reported a finding that African countries are poor because they have too much genetic diversity (published in the American Economic Review); that parents who pay for college will actually encourage their children to do worse in class (American Journal of Sociology); and that women’s political attitudes show huge variation across the menstrual cycle (Psychological Science). Each of these topics is, in its own way, fascinating, but the particular studies have serious flaws, either in the design of their data collection (the political attitudes study), the analysis (the study of college grades), or the interpretation of their data analysis (the genetic diversity study). Flawed research can still contribute in some way toward our understanding—remember our view of science as a set of research directions—but journalists can mislead their readers if they present such claims unquestioningly. The statistical errors in these published papers are important but subtle—subtle enough that all three were published in the top journals in their fields. Papers such as these represent a fundamental difficulty in science reporting. On one hand, they are flawed in the sense that their conclusions are not fully supported by their data (at least, according to me and various other observers); on the other, we cannot expect a typical science reporter on his or her own to catch methodological errors that escaped several peer reviewers as well as the articles’ authors. My goal here is to suggest a strategy for science writers to express uncertainty about published studies without resorting to meaningless relativism. I will get to my recommendations in the context of a paper from 2007 by sociologist Satoshi Kanazawa on the correlation between attractiveness of parents and sex of children. Some detail is required here to understand the statistical problems with this paper.
But my ultimate reason for talking about this particular example is that it demonstrates the challenge of reporting on statistical claims. This study was reported in what I view as an inappropriately uncritical way in a leading outlet for science journalism, and I will address how this reporting could be improved without requiring some extraordinary level of statistical expertise on the part of the journalist. I brought this case up a few years ago at a meeting of the National Association of Science Writers, where I spoke on the challenges of statistical inference for small effects. Using a dataset of 3,000 parents, Kanazawa found that the children of attractive parents were more likely to be girls, compared to the children of less attractive parents. The correlation was “statistically significant”—that is, there was less than a 5% chance of seeing a difference this extreme if there were no correlation in the general population. This result, along with some more general claims about evolutionary psychology, was published in the Journal of Theoretical Biology and received wide media exposure. But Kanazawa’s claims were not supported by the data in the way claimed in his paper. Simply put, his sample size was so small that it would be essentially impossible to learn anything about the correlation between parental beauty and child’s sex in the population. This may sound surprising, given that a sample size of 3,000 seems large. But it is not, given the scientific context. There is a vast scientific literature on the human sex ratio, and any plausible differences in the probability of a female birth, comparing beautiful and ugly parents, would have to be very small: on the order of one-third of a percentage point or less. For example, it could be that the probability of having a girl is 48.9% for attractive parents and 48.7% for unattractive parents. It turns out that you would need a sample size far greater than 3,000 to detect such a small effect. To develop your intuition on this, consider national opinion polls, which typically interview about 1,500 people and have a margin of error of three percentage points either way. If you crunch the numbers, you would find that you need a representative sample of hundreds of thousands of people to detect differences of less than one-third of a percentage point. So from a mathematical standpoint, Kanazawa’s study never had a chance to provide an adequate estimate for what it was purporting to estimate. What about the claim of statistical significance, namely, that a pattern as extreme as in the data would occur by chance less than 5% of the time? The answer is that events that are somewhat rare will happen...
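To see why 3,000 is far too few, here is a minimal sketch of the standard two-proportion power calculation that the argument above rests on. The code is my own illustration rather than the author’s; the inputs (a 48.9% versus 48.7% chance of a girl, a 5% significance level, 80% power) follow the figures quoted in the article.

```python
from scipy.stats import norm

def required_n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group needed to detect the difference
    between two proportions with a two-sided z-test at the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value of the test
    z_beta = norm.ppf(power)            # quantile corresponding to the power
    p_bar = (p1 + p2) / 2               # pooled proportion
    effect = abs(p1 - p2)
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / effect ** 2
    return int(round(n))

# Differences of 0.2 and 0.3 percentage points in the probability of a girl,
# around the baseline sex ratio discussed in the article.
for p1, p2 in [(0.489, 0.487), (0.490, 0.487)]:
    n = required_n_per_group(p1, p2)
    print(f"{p1:.1%} vs {p2:.1%}: roughly {n:,} parents per group")
# Output is on the order of 400,000 to 1,000,000 parents per group --
# compared with Kanazawa's total sample of about 3,000.
```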
Game Theory Is Useful, Except When It Is Not
December 30th, 2013, 05:59 AM
The study of strategic interactions is gaining popularity across disciplines, but that does not mean its relevance is universal.   Note: This article was originally published on July 8, 2013. Although game theory is now a household name, few people realize that game theorists do not actually study “games” — at least not in the usual sense of the word. Rather, we interpret a “game” as a strategic interaction between two or more rational “players.” These players can be people, animals, or computer programs; the interaction can be cooperative, competitive, or somewhere in between. Game theory is a mathematical theory and, as such, provides a slew of rigorous models of interaction and theorems to specify which outcomes are predicted by any given model. Sounds useful, doesn’t it? After all, many people are familiar with one of game theory’s most famous test cases: the Cold War. It is well-known that game theory informed U.S. nuclear strategy, and indeed, the interaction between the two opposing sides — NATO and the Warsaw Pact — can be modeled as the following game, which is a variation of the famous “Prisoner’s Dilemma.” Both sides can choose to either build a nuclear arsenal or avoid building one. From each side’s point of view, not building an arsenal as the other side builds one is the worst possible outcome, because it leads to strategic inferiority and, potentially, destruction. By the same token, from each side’s point of view, building an arsenal while the other side avoids building one is the best possible outcome. However, if both sides avoid building an arsenal, or both sides build one, neither side has an advantage over the other. Both sides prefer the former option because it frees them from the enormous costs of a nuclear arms race. Strangely enough, though, the only rational strategy is to build an arsenal, whether the other side builds one (in which case you are saving yourself from possible annihilation) or does not (in which case you are gaining the strategic upper hand). This analysis gave rise to the doctrine of MAD: Mutually Assured Destruction. The simple idea is that the use of nuclear weapons by one side would result in full-scale nuclear war and the complete annihilation of both sides. Given that nuclear stockpiling is unavoidable, MAD at least guaranteed that no side could afford to attack the other. So it would seem that game theory has saved the world from thermonuclear war. But does one really need to be a game theorist to come up with these insights? Game theory tells us, for example, that different forms of stable outcomes exist in a wide variety of “games” and computational game theory gives us tools to compute them. But the type of strategic reasoning underlying Cold War policy does not directly leverage deep mathematics — it is just common sense. More generally, one can argue that game theory — as a mathematical theory — cannot provide concrete advice in real-life situations. In fact, one of the most forceful advocates of this point is the well-known game theorist Ariel Rubinstein, who claims that “applications” of game theory are nothing more than attaching labels to real-life situations. 
In an article that rehashes his well-known views, Rubinstein cites the euro zone crisis, which some say is a version of the Prisoner’s Dilemma, to argue that “such statements include nothing more profound than saying that the euro crisis is like a Greek tragedy.” In Rubinstein’s view, game theory is first and foremost a mathematical theory with a “nearly magical connection between the symbols and the words.” By contrast, he contends, for the purpose of application, we should see game theory as a “collection of fables and proverbs” that can provide an interesting perspective on real-life situations but not give specific recommendations. Michael Chwe, a professor of political science at the University of California, Los Angeles, offers a different take, arguing in his latest book that novelist Jane Austen is, in fact, a game theorist. After describing a scene from Mansfield Park, Chwe writes: “With this episode, Austen illustrates how in some situations, not having a choice can be better. This is an unintuitive result well known in game theory.” Another of Austen’s game-theoretic insights has explicit applications: “When a high-status person interacts with a low-status person, the high-status person has difficulty understanding the low-status person as strategic. … This can help us understand why, for example, after the U.S. invaded Iraq, the resulting Iraqi insurgency came as a complete surprise to U.S. leaders.” To Chwe, Austen studied the principles of strategic interaction on the level of Rubinstein’s “fables and proverbs.” But if we take his conclusion — this makes Austen a game theorist — this means that these fables and proverbs lie at the core of game theory, rather than at game theory’s periphery, where it interfaces with popular culture. Chwe makes a convincing case that Austen was keenly interested in studying how people manipulate each other — and, indeed, that is one of the things that make Austen a great writer. But that does not necessarily make her a great game theorist. In fact, as a mathematical and scientific theory, game theory often falls short when it is applied to complex situations like international relations or parliamentary balance of power. However, in some situations, game theory can be useful in the scientific, prescriptive sense. For example, game theory is useful for, well, playing games. Modern software agents that play games like poker (such as the ones from Tuomas Sandholm’s group at Carnegie Mellon University) do in fact use rather advanced game theory, augmented with clever equilibrium-computation algorithms. Game theory actually works better when the players are computer programs, because these are completely rational, unlike human players, who can be unpredictable. Game theory is also useful for designing auctions. To give a concrete example from my own experience, consider the surprisingly lively Pittsburgh real-estate market, where multiple buyers typically submit simultaneous bids for one house without seeing each...
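As a concrete rendering of the arms-race game described earlier, the sketch below encodes a Prisoner’s Dilemma payoff table and checks that building an arsenal is a dominant strategy for each side. This is my own illustration rather than anything from Rubinstein or Chwe, and the payoff numbers are arbitrary ordinal values chosen only to respect the preference ordering given in the text.

```python
# Ordinal payoffs for the arms-race game (higher = more preferred).
# Each cell maps (NATO action, Warsaw Pact action) -> (NATO payoff, Pact payoff).
# The numbers are illustrative placeholders that respect the ordering in the text.
ACTIONS = ("avoid", "build")
PAYOFFS = {
    ("avoid", "avoid"): (3, 3),  # no arms race, neither side has an advantage
    ("avoid", "build"): (1, 4),  # worst case for the side that holds back
    ("build", "avoid"): (4, 1),  # best case for the side that builds
    ("build", "build"): (2, 2),  # costly race, still no advantage
}

def is_dominant(player, action):
    """True if `action` does at least as well as the alternative for `player`
    (0 = NATO, 1 = Warsaw Pact), whatever the other side chooses."""
    alternative = ACTIONS[1 - ACTIONS.index(action)]
    for opponent_action in ACTIONS:
        if player == 0:
            own, alt = (action, opponent_action), (alternative, opponent_action)
        else:
            own, alt = (opponent_action, action), (opponent_action, alternative)
        if PAYOFFS[own][player] < PAYOFFS[alt][player]:
            return False
    return True

for player, name in enumerate(("NATO", "Warsaw Pact")):
    for action in ACTIONS:
        if is_dominant(player, action):
            print(f"{name}: dominant strategy is to {action}")
# Both sides end up at (build, build), even though (avoid, avoid) is better
# for both -- the structure that makes this a Prisoner's Dilemma.
```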
Still Waiting for Change
December 30th, 2013, 05:59 AM
Economists are ignoring a class of workers whose wages have been frozen for decades. Note: This article was originally published on August 5, 2013. Since its inception, the minimum wage has provoked fiery debate. Indeed, when the Fair Labor Standards Act (FLSA) set the first federal minimum wage at $0.25 in 1938, the National Association of Manufacturers deemed it “a step in the direction of communism, Bolshevism, fascism, and Nazism.” Today, it remains a politically divisive issue. While many Democrats, including President Barack Obama, are calling for an increase at the federal level, numerous Republicans, including centrists in the party, would abolish it altogether. Amid political stalemate, low-wage workers have been galvanized into action, as seen in recent strikes across the country, from Macy’s to McDonald’s. The minimum wage happens to be one of the most studied topics by economists and policy analysts. Yet a puzzle remains: there is scant interest among economists – including those who study labor economics – and the broader policy community in the second major tier of the minimum wage system, the “sub-minimum” wage received by tipped workers. (A search among the top ten economics journals, for example, produces no articles on this subject in the last ten years.) This split in wage tiers was established in 1966, when Congress amended the FLSA to allow for a sub-minimum wage for tipped workers. While sub-minimum wage levels for students, youth and workers in training have long been allowed as temporary measures, the 1966 law made the “tipped wage” permanent through its “tip credit” provision. At that time, employers of tipped workers were allowed to pay a base wage of only half of the regular minimum wage, with the other half provided through customer tips, which count as a “credit” toward the employee’s total wage. This framework is legal as long as the sum of the tipped wage and customer tips amounts to the regular minimum wage. In short, customer tips are not wholly a gift or token of gratitude from the served to the server but a wage subsidy provided to employers. In 1966, employers and customers shared equally in contributing to the wages of tipped workers. As the law intended, the tipped wage paid by the employer and the tip credit from the customer were each half of the regular minimum wage. Over the next three decades, the official tip credit provision sometimes dropped as low as 40%, and never exceeded 50% of the regular minimum wage. As the situation stands today, at the federal level, the maximum tip credit allowance is $5.12, which is equal to the minimum wage ($7.25) minus the tipped wage ($2.13). The $2.13 tipped wage is now just 29% of the regular minimum wage, while the tip credit afforded to employers makes up 71%. What happened? Ironically, it was the Minimum Wage Increase Act of 1996 that initially caused this relative drop in the tipped wage. Signed into law by President Bill Clinton, the act increased the federal minimum wage from $4.25 to $4.75 an hour but froze the tipped minimum wage at $2.13 an hour under heavy pressure from the restaurant lobby. At the time, the $2.13 tipped wage had been in effect since 1991. This means that the sub-wage floor we have today has actually been in effect for 22 years. And when lawmakers took up an FLSA amendment in 2007 to raise the minimum wage in three steps, the tipped wage was again left off the table.
Inflation has also eroded the purchasing power of both wage floors, but the fundamental cause behind the decline of the tipped wage has been the long decades of inaction. Today, its real value is at its lowest level since it was established in 1966. Over time, the ratio between the two wages fell from 50% in 1966 to just 29% in 2013, which means the tipped wage has fallen more than 20 percentage points relative to the federal minimum wage. The subsidy afforded to employers ($5.12) is now more than twice the base wage they actually pay their workers. In short, most of the money these workers receive is from customer tips, not from their employer. In the absence of federal action, states have stepped in to institute a mix of wage floors of their own. Under various state policies, we have a system where wait staff in Texas are paid an hourly wage as low as $2.13, while a server at the same restaurant chain in Washington State earns a base wage of at least $9.19 an hour. This example reflects the range of both wage tiers across the country, which in turn changes the tip credit amount that employers are allowed to claim. This is because the joint wage system depends on both the regular and the tipped minimum wages. The first determining factor is whether a state follows the federal regular minimum wage of $7.25 or whether it has adopted a higher state minimum. The second factor is which tip credit provision the state falls into: full, partial, or none. A full-tip credit state takes advantage of the maximum allowable tip credit, enabling payment of the lowest sub-minimum wage ($2.13 per hour, the federal tipped minimum wage). A partial-tip credit state has a sub-minimum wage that is above $2.13 but below the binding minimum wage for that state. Finally, states that require employers to pay tipped workers the binding regular minimum wage are referred to as no-tip credit states; at a minimum, tipped workers are paid the same as non-tipped workers. The figure below shows the three basic tip credit categories. The red states allow the full-tip credit and a sub-wage of $2.13. The blue states have tipped wages above $2.13 but below the binding regular minimum—the tip credit amount varies in these states. Those colored in grey do not allow the sub-minimum wage; in those cases, that has been the policy for a long time. These six general scenarios — determined by the three tip credit provisions and...
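To make the two-tier wage arithmetic concrete, here is a minimal sketch that computes the employer’s tip credit and the tipped wage’s share of the binding minimum for the scenarios described above. It is my own illustration, not the author’s analysis; the dollar figures are the ones quoted in the text.

```python
def tip_credit_profile(regular_minimum, tipped_minimum):
    """Return the employer's tip credit and the tipped wage's share of the
    binding regular minimum wage (the two quantities discussed above)."""
    credit = regular_minimum - tipped_minimum
    share = tipped_minimum / regular_minimum
    return credit, share

# Hourly figures quoted in the article (2013).
scenarios = {
    "Full tip credit, federal floors (e.g. Texas)": (7.25, 2.13),
    "No tip credit (Washington State)":             (9.19, 9.19),
}

for label, (regular, tipped) in scenarios.items():
    credit, share = tip_credit_profile(regular, tipped)
    print(f"{label}: base wage ${tipped:.2f}, tip credit ${credit:.2f}, "
          f"tipped wage is {share:.0%} of the regular minimum")
# Federal case: a $5.12 tip credit and a 29% share, matching the figures above.
```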
Understanding the Irrational Commuter
December 30th, 2013, 05:59 AM
The increasing sophistication of data collection and analysis gives us deeper insights into human behavior — and how we make decisions about everyday travel. Note: This article was originally published on September 9, 2013. Transportation debates, from the local to national level, are invariably waged between competing interests. There are players representing economic development, road construction, the environmental lobbies, and diverse groups of transportation users — just to name a few. But there is also an important role for independent experts to play — not just as honest brokers, but as analysts who can assess what they learn from the increasingly sophisticated collection of data about travel and human behavior. And this is where academics can step in. Research that I have conducted with colleagues at the University of Minnesota has allowed us to break down travel behavior and draw some surprising lessons that can guide transportation policy. Why are these lessons so valuable now? Technology has brought us to the point where we can provide incentives and disincentives to efficiently manage road use. To take just one example, look at the pervasive issue of congestion, which can be addressed through “congestion pricing.” To be sure, the cost of collecting a new road fee is non-trivial, especially compared with the alternative, a higher gas tax, which simply requires an annual check of refinery sales. But the benefits are a significant improvement in the management of road use, so that drivers who do not need to travel when roads are congested will have an incentive to avoid those peak times. If applied correctly, the resulting changes in route choices reveal where roads are overbuilt, and where demand, even after pricing, is sufficient to justify new capacity. In short, the most cost-effective thing we can do in the transportation field is to get the prices right. Once we do that, everything else will follow. Above all, this requires field experiments that test and evaluate different strategies, and the deployment of those that prove successful. I will elaborate on some of my experiments below, but will start by asking some basic questions about how people travel. Do people take the shortest path? This is the very first question we need to ask, because we need to know whether travelers really do think rationally as they chart their commute. And our experiments showed that they do not: Only 15% of commuters take the shortest path to work, while a greater number take a path that is marginally longer. And many take routes that are up to 10 minutes longer than the shortest path. For non-commute trips, which tend to be a little bit shorter, more people take the shortest route. But even though you would expect that people making the same trip every day would know what their travel network looks like, they either choose not to take the shortest path or do not know what that route is. It is important to make this point up front, because a misconception among transportation modelers is that people inherently take the shortest travel-time route when they are navigating on roads, or that the reality is only slightly different from this simplifying assumption. This notion, in fact, is embedded in the travel demand models that are used in every transportation-planning and forecasting exercise in large metropolitan areas. The data, however, show this is not true.
Our findings thus challenge the computerized travel demand models that are used daily to predict the effects of network changes (e.g., adding a lane), land uses (e.g., developing a surface parking lot), and policies (e.g., raising the price of gas) on levels of traffic and subsequent delays. One of the key components of these models is called “route assignment” — where the model tells traffic which route to take — or “route choice” — if we imagine the model predicts which route users will choose to maximize their utility. How did we set up this experiment? We looked at how driving patterns had changed after the I-35W bridge collapsed in 2007. A few weeks prior to the reopening of the bridge in 2008, we installed GPS units in 200 private vehicles owned by study participants, and told them to drive as they normally would. We did not give them any other instructions except that they had to come to a designated location to get the GPS unit installed, and then return it eight weeks later. These people worked at or near the University of Minnesota or in downtown Minneapolis, and therefore were likely to be affected by the change in the network associated with the bridge. We needed to know what the real shortest path was in a given network — which required travel time data on all road segments — and which paths people actually used. While people might tell you in a survey they are going from A to B, we did not know which particular routes they were using — and many people could not accurately answer anyway. But with the advent of GPS systems and more pervasive traffic monitoring, we could get better data. With the help of my research assistant Shanjiang Zhu (now a professor at George Mason University), we then organized the data. We had to make sure that GPS points fell on the network and that people were driving on the right side of the road. We matched this data to routes, so for each individual trip we could track where it started, where it ended, and the specific road segments that were taken. We used this very large data set to estimate the travel time on all of the relevant links in the network. In addition to knowing which route someone actually took, we measured the expected travel time on many of the alternative routes that a traveler might consider, since other travelers used those roads. The advantage of the new GPS data is that it gives the speeds on the arterials at any given time. So we compared the...
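The comparison at the heart of the study can be sketched in a few lines: estimate link travel times, compute the shortest-time path between an origin and a destination, and measure how much longer the observed route is. The toy network, the travel times, and the use of the networkx library below are my own assumptions for illustration, not the study’s actual code or data.

```python
import networkx as nx

# Toy road network: directed links with travel times (minutes), standing in
# for link times estimated from observed GPS traces.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("home", "A", 4), ("home", "B", 5),
    ("A", "C", 5),    ("B", "C", 3),
    ("C", "work", 7), ("A", "work", 13),
], weight="time")

# Route a shortest-path ("route assignment") model would predict.
assigned = nx.shortest_path(G, "home", "work", weight="time")
assigned_time = nx.path_weight(G, assigned, weight="time")

# Route a commuter was actually observed to take (hypothetical GPS trace,
# already map-matched to the network).
observed = ["home", "A", "work"]
observed_time = nx.path_weight(G, observed, weight="time")

print("model-assigned route:", " -> ".join(assigned), f"({assigned_time} min)")
print("observed route:      ", " -> ".join(observed), f"({observed_time} min)")
print(f"extra travel time actually chosen: {observed_time - assigned_time} min")
```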
Why Write the History of Capitalism?
December 30th, 2013, 05:59 AM
A new generation of scholars is rewriting the story of capitalism by shaking off the old assumptions of both the Left and Right. Note: This article was originally published on July 8, 2013. Earlier this spring, I received a phone call from a reporter at The New York Times. Since I have written a couple of books on the history of American personal debt, the occasional inquiry from journalists was not out of place, but usually they want to hear about the five best financial tips for success, not “real” history. This particular journalist, Jennifer Schuessler, asked me a very odd question: What does it mean to write the history of capitalism? I was dumbfounded. I paused. I asked her where she had even heard that term. She evaded the question — “oh, it’s in the air” — but I began to tell her about where I thought the burgeoning subfield had come from, peppering my response with “agency,” “contingency” and other history jargon. She told me she could translate. As I spoke, I kept wondering why she cared. After all, The New York Times does not usually run stories on the subfields of academic disciplines, especially history. So you can imagine my surprise when I woke up the next Sunday and saw the front-page headline: “In History Departments, It’s Up With Capitalism.” For days, it was the most emailed story on the Times web site, with hundreds of people suddenly weighing in to comment on what capitalism meant. The discussion forums were, in many ways, more revealing than the article itself. Internet trolls had their say, but I was struck much more by the forums’ threads of disagreement. Many readers pointed out what they thought all the scholars had missed or excluded, all in an effort to determine whether we were pro-corporate apologists funded by big money (no) or communist “fifth columnists” (a more interesting charge, but again, no). For me, the ad hominem attacks were less telling than the fact that there was simply a fresh discussion of capitalism. For most of the readers who weighed in, capitalism is totally explained by either Karl Marx or Adam Smith (with the occasional John Maynard Keynes or Joseph Schumpeter tossed in). That is, capitalism is a system that can be universally explained through one theory or the other. Either you understand it or you do not. Either you read the right author or you are an ignoramus. In this view, the history of capitalism is simply the logical unfolding of a natural law, like an apple falling from a tree. As one reader put it, “a history of capitalism would be as revelatory as a ‘history of gravity.’” If only events befell us as predictably as Isaac Newton’s proverbial apple. History is not about proving a universal theory, but seeing how change occurs over time. As a scholarly practice, history is about explaining how events actually played out, with all their attendant unruliness. The essential problem is not to primly define capitalism like a schoolmarm, but to think about why capitalism, which appears to be so simple, evades easy definitions. And in the last decade, there has been a renewed interest among historians in not only challenging existing definitions, but in historicizing that very untidiness (much to the consternation of nominalists everywhere). As the United States emerges from the most severe financial crisis since the Great Depression, the sudden urgency is not difficult to understand. Booms and busts buffet us with alarming frequency.
But it is important to note that the term “history of capitalism” began to assume a currency in the historical profession sometime in the mid-2000s, between the tech crash and the Great Recession. While the Recession has sparked renewed interest from the public, the new work preceded 2008 and marked an important shift that was not just intellectual but generational. For two generations, almost no historians who wanted to make a name for themselves worked on economic questions. New Left scholars of the 1960s and 1970s emphasized movements that fought for social change (labor, women, and African-Americans). The postmodern shift of the 1980s and 1990s pushed traditional subjects of economic history out of the field, and with it the stillborn subfield of cliometrics – a quantitative approach to economic history. If a scholar wrote about the history of business, or even worse, businessmen, he or she seemed to betray right-wing tendencies. If you wrote about actual businesses, many on the Left felt it was only to celebrate their leaders, the way that most historians wrote celebratory histories of the oppressed. Some stalwarts remained (of all political persuasions), but on the whole, they were marginalized. By contrast, for the generation of graduate students that came of age in the late 1990s and 2000s, the world looked very different. Social movements had either won — or lost — decades earlier. Radical reform, in the midst of seemingly unending economic stagnation, seemed a fantasy. Most importantly, American capitalism, as of 1989, had beaten Soviet communism. The either/or distinctions of the Cold War seemed less relevant. The questions that motivated so much of social history seemed naïve. The old question “Why is there no socialism in America?” became “Why do we even talk about socialism at all since we are in America?” We knew endless amounts about deviationist Trotskyites but nothing about hegemonic bankers. This gap came from the belief that there was very little to know. Alfred Chandler’s The Visible Hand was the only business history book most American graduate students of history continued to read. And it reaffirmed everything that the New Left thought about capitalism: that it was inevitable, mechanical, efficient, and boring. Capitalists operated with an inexorable logic, whereas the rest of us were “contingent agents” pursuing our free will. If pressed, few scholars would have put this assumption in these words, but it colored the questions that people asked. “Hegemony,” a term appropriated from Antonio Gramsci by cultural studies scholars in the 1970s, became diluted into...
A Scientist Goes Rogue
December 30th, 2013, 05:59 AM
Can social media and crowdfunding sustain independent researchers? Note: This article was originally published on August 5, 2013. Ethan Perlstein is a contradiction: an utterly modern researcher who hearkens back to the 19th century tradition of the “gentleman scientist.” Perlstein, a self-dubbed evolutionary pharmacologist with a Ph.D. in molecular and cellular biology from Harvard, is one of the most vocal members of the so-called independent scientist movement. As with many trailblazers, he had no intention of starting a revolution; rather, as he puts it, “my back was to the wall.” That wall presented itself in the summer of 2012, when he was completing his fifth year as a postdoctoral fellow at Princeton. “I’d already gone through one year of the application cycle for assistant professorships and ran into a buzz-saw, because for one job opening, there are 300-400 applicants. I was preparing for a second bid,” he recalled. “I was told, ‘two years is nothing on the academic job market these days; you could be spending four years on the market. One postdoc is not enough these days; you need two postdocs.’ I just realized, I don’t want to do this. I want to do my science.” The seeds for going rogue had already been planted on Twitter, where scientists were openly and honestly kvetching in a way that only really happens on social media. Some tweets were grim, such as: “80% of PhDs in biology don’t end up on the tenure track.” (For more on this, see Perlstein’s clever blog post, “The Tenure Games.”) Until January 2011, he had no interest in Twitter, but once he created an account and started connecting with other scientists, he began to learn about alternative tracks for people in his situation. “People were talking about new ways to publish and review papers after publication, crowdfunding and all these alternative things, so I educated myself on these trends.” He started to study the history of independent scientists and discovered that “it goes back to the gentleman scientist tradition, like Darwin. I thought, I don’t really want to resurrect that tradition of the male-dominated, aristocratic leader class, but they did come up with huge discoveries.” As the term gentleman scientist implies, those people had money to play around with. Perlstein said, “The biggest stumbling block for someone who’s not a theorist in biology is that it’s so expensive to maintain a lab, and the supplies to use in that lab.” Perlstein’s specific area of biomedical research is particularly costly. So he made a very web 2.0 move: crowdfunding. Perlstein cited a tweet he read recently, which called crowdfunding the “gateway drug” of the independent scientist. “I think there’s a ring of truth to that,” he added. In September 2012, Perlstein decided to start a meth lab for mice to find out where radioactive amphetamines accumulate in mouse brain cells. He launched a crowdfunding campaign on the site Rockethub, a kind of Kickstarter for academic science projects. The tag line, “Crowdfund my meth lab, yo,” was accompanied by a photo from Breaking Bad, the show about a teacher who runs a meth lab. The goal: to raise $25,000. It was hip. It was bold. It was youthful. It was as good an example as any of how wide the gap is between academic scientists and independent scientists, reminiscent of Steve Jobs circa 1975 versus IBM of the same era. It is a safe bet that some of his former peers thought the move populist or unbecoming of an academic, particularly with the Breaking Bad allusions.
But it worked: He raised $25,460 from over 400 people. And yes, as with Kickstarter, he offered little thank-you gestures to his donors, including “a 3-D printed model of methamphetamine the size of an iPhone that kind of looks like a dreidl.” He prints it himself on a 3-D printer. It is blue, a nod to Breaking Bad: “In the show they talk about blue crystal,” he explained. Trinkets aside, Perlstein is publishing the results of his research on his web site, in real time, rather than sitting on data for journal publication. One of the most controversial aspects of Perlstein’s independent scientist concept is that research transparency is key. “Crowdfunding could be one of the pillars supporting independent scientists, but it only works if you tell people what you are doing with their money.” That is where many scientists tempted to take the Perlstein route would stop short and possibly turn back. There appear to be too many risks, including the possibility of someone else stealing the idea. Perlstein laughs in the face of such fears. “Being an independent scientist is self-liberation from the constant paranoia that someone will steal [your idea]. My answer to people is that if you’re working in an area that is so faddish, you should think about working in a different area.” It’s not just chutzpah. He also thinks that stealing ideas is not as feasible as people seem to think. “We’re taking a technique in pharmacology that was developed decades ago. Someone could have done this at any time since then, but no one has. My talking about it now is not going to make someone say, ‘We’re going to do it.’ And even if they were to try to scoop us, they’re not going to do it overnight. They’re going to go through the same growing pains of getting preliminary data.” And here is the part where he starts to channel the spirit of his 19th century independent scientist forebears: He is a purist. He’s out to find cures for rare diseases, among which he includes cystic fibrosis, Tay-Sachs, and Parkinson’s – all of which are relatively neglected by big drug companies. In the end, it is the science that matters. “Of course, someone could be doing in parallel and in stealth what we’re doing, but who cares? If we do the same experiment and independently get the same result, that’s the scientific method. Isn’t that the whole point? Getting the same result no...
Why U.S. Financial Hegemony Will Endure
December 30th, 2013, 05:59 AM
The great financial crisis of 2008 convinced many in the markets and policy arena that the U.S. had reached its high-water mark of dominance and that its decline was sealed. As they saw it, American financial prominence had proven so destabilizing that other countries had to insulate themselves against “profligate” U.S. behavior. Furthermore, the crisis dramatically reduced U.S. attractiveness to global capital, weakening its financial power to such an extent that the U.S. would be severely constrained in its ability to finance government debt at home and pursue geopolitical projects abroad. As a result, many in this camp have anticipated a restructuring of the international financial system away from New York and toward China and other emerging markets. According to the World Economic Forum, Hong Kong displaced the United States as the world’s leading financial center last year. Many would agree with the economist Arvind Subramanian, who has argued that by 2030 China will be the world’s sole superpower, and that it is already the “world’s largest banker.” These assessments see power as a result of the internal attributes of national economies: large economies with attractive financial sectors have power, while weaker ones do not. Accordingly, the U.S. decline in the share of global trade and income, and its domestic financial instability, should diminish its influence. But this focus fails to consider the ways in which the global financial network is, in fact, a complex and adaptive system. Power within this system does not depend solely on domestic attributes, but on the distribution of financial relationships that exists globally. In other words, the most well-connected economies, not just the biggest, are the most powerful. By extension, change within this structure does not follow a linear process, and economies that are initially more advantaged will continue to grow as the system develops.
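One way to make “well-connected” precise is a network centrality measure. The sketch below is my own illustration rather than the author’s model: it scores countries in a toy network of cross-border financial ties by eigenvector centrality, with invented countries, link weights, and size figures, to show that the structural ranking need not track economic size.

```python
import networkx as nx

# Toy network of cross-border financial relationships; edge weights stand in
# for the intensity of claims between financial systems. All values invented.
G = nx.Graph()
G.add_weighted_edges_from([
    ("US", "UK", 9), ("US", "Japan", 7), ("US", "Eurozone", 8),
    ("US", "China", 5), ("UK", "Eurozone", 6), ("Japan", "China", 4),
    ("China", "Brazil", 2), ("Eurozone", "Brazil", 1),
])

# The "internal attributes" view: a stand-in for economic size (arbitrary units).
size = {"US": 17, "Eurozone": 13, "China": 9, "Japan": 5, "UK": 3, "Brazil": 2}

# The "network" view: centrality computed from the structure of ties.
centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

for country in sorted(centrality, key=centrality.get, reverse=True):
    print(f"{country:8s} size={size[country]:2d}  centrality={centrality[country]:.2f}")
# In this toy network a small but heavily connected economy (the UK) outranks
# a larger, less-connected one (China): power read off ties, not size alone.
```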
History Versus Hagiography
November 3rd, 2013, 05:59 AM
Among the many dilemmas a professional historian will face in the course of his or her work, few are as vexing as the question of how, or even if, moral judgement fits with historical interpretation. This is especially true of inescapably controversial figures or episodes in the past that seem to demand of the historian some moral and ethical insight or conclusion, especially if the past is still alive in contemporary memory. That is, they demand some lesson about the deeper or ultimate meaning of the historical question under study beyond a mere empirical narration of facts or the logical explanation of cause and effect. I will talk about a case — the Vatican’s role during the Holocaust — to make some broader points about moral judgement. I do not believe historians ought to avoid it per se, as if historical interpretation were somehow amoral. But we need to understand why it is so important to differentiate the stages of scholarly inquiry, and especially how a full and proper historical interpretation can inform moral judgement about past events and their meanings. We must let history do its essential task of showing what happened and why, so that we can then conduct a reasonable, informed analysis about what might have been, and what ought to be. Few questions are thornier than the issue of papal intervention, or lack thereof, on behalf of persecuted Jews during the Holocaust. Arguably the most contentious claims reflect competing narratives about the presumed role of the pope and the Vatican in rescue and relief initiatives on behalf of Jews, especially in Italy, and Rome in particular. Narratives of papal rescue and relief often blur the lines between wartime experiences and their framing in postwar memory. Nowhere is this more evident than in the self-congratulatory narrative attributing to Pius XII a decisive role as “rescuer” – a narrative that the Vatican itself crafted before the war had even ended. Sensitive to charges of papal inaction on behalf of persecuted Jews, senior papal diplomats offered specific examples of the thousands of Jews in Rome — up to 6,000 — who had been given “refuge and succor” by the Vatican during German occupation of the city, primarily in the form of material aid, asylum, and safe passage. This narrative also came from Pius XII himself, who utilized self-ascribed claims of rescue and relief to justify his policy of impartiality and cautious public diplomacy. It was also useful in deflecting the constant entreaties reaching the pope during the war, very often from other ecclesiastical authorities, for the Vatican to do more for persecuted European Jews. Immediately after the war, the pope and senior advisors saw diplomatic advantage in publicizing the many public expressions of Jewish gratitude. This, in turn, set the stage for a similar response in the 1960s to Rolf Hochhuth’s The Deputy, a drama about the pope’s role during the Holocaust, first performed in Berlin in February 1963, five years after Pius’ death. Although it was a fictionalized historical account, Hochhuth’s play sparked a dramatic rethinking of Pius XII’s wartime role. More than any single work of sound historical interpretation, Hochhuth’s work cast the indelible image of the wartime pope as a moral coward and political failure whose cautious diplomatic approach played into Hitler’s murderous hands. To this day, Pius apologists are still wrestling with the ghosts stirred by Hochhuth’s Deputy. 
Typically, they point to the many Jews after the war who expressed gratitude for papal rescue and relief during the Holocaust. What these apologists present us with is a selective arrangement of historical fragments, which they construe as persuasive vindication of the wartime pontiff’s decision-making. In this respect, the apologists’ account represents mythology and hagiography more than critical history. The problem permeates scholarship in the field. Indeed, one is struck by how often in the literature on Pius XII we find a juxtaposition of “supporters” and “defenders” pitted against “critics” and “skeptics.” The former make untenable claims that the pope and the Vatican played a decisive role in saving several hundred thousand Jews during the Holocaust. The most exaggerated of these – which even some respectable scholars and the Vatican repeat – have achieved the status of established fact in apologetic circles, all the more so because they come from Jewish sources. This camp would have it that upwards of 800,000 Jews were saved during the Holocaust by means of direct or indirect papal intervention. That said, few scholars lend serious credence to this claim, given the specious method by which it was derived, not to mention the apologetic-polemical end to which that inflated figure has been put. However, other longstanding claims of papal assistance are more credible and warrant sustained, critical scholarly attention, if only to place them in a proper context. As I argue in my book, Soldier of Christ, there is ample evidence to show that the pope and his advisors did authorize or tacitly allow papal representatives and ecclesiastical entities around the world to mobilize their resources to help those facing persecution. This was hardly tantamount to a policy or a directive of Jewish rescue and relief, and it certainly does not stand as evidence of an intentional scheme to furtively mobilize church resources on a massive scale to help persecuted Jews. Still, it was a measure of decisive assistance just the same. The challenge is finding a framework for calibrating that assistance in quantitative and qualitative ways. As an example of just how complex this question is, we can look to the controversy this past summer over the wartime record of Giovanni Palatucci, an Italian police official long regarded as a righteous rescuer but now implicated by new research as a possible collaborator in the Holocaust. In the span of a few short months, an established version of history was called into question as mythology. On one side we have the established public memory of Palatucci – an ordinary Catholic rescuer who did extraordinary things to save Jewish lives during the Holocaust in...
Can Corporations Be Good Citizens?
November 3rd, 2013, 05:59 AM
The reputation of big business has taken blow after blow in the last few years. The global financial crisis revealed the risks to the economy of Wall Street excess. The Deepwater Horizon oil spill showed the dangers to the environment of corporate decisions that externalized the possibility of serious harm. The explosion of corporate expenditures in the 2012 election cycle indicated that corporations were attempting to exert their influence over our democratic life. Americans are terribly skeptical of big business, and probably increasingly so. According to a 2012 Gallup poll, Americans’ satisfaction with the size and influence of big business is near record lows, and has fallen by 40 percent in the last decade. This skepticism is feeding a lively debate — largely between two camps on the ideological left — about how to take advantage of this moment to rein in corporate power. Although both camps distrust corporations, they are fundamentally at odds over not only possible remedies, but the nature of the problem. The crucial difference is over what might be called corporate “citizenship.” One camp sees corporate power as something that can be used constructively; the other sees it as the evil to be corrected. For decades, there has been a vocal minority of corporate law scholars (including myself) who have challenged the American corporation to broaden its role in society and enlarge the obligations it owes beyond the bottom line. These scholars have assailed the norm of shareholder primacy and called on corporations to recognize and act on the interests of all stakeholders — a view sometimes called “stakeholder theory.” These critics, in effect, call on corporations to act as if they were players not only in the private sphere but in the public one as well. To act, one might say, as citizens. To call on corporations to act as “good corporate citizens” means that they should act as if they have broader obligations to the polity and society that cannot be entirely satisfied by reference to their financial statements. Meanwhile, a separate camp of corporate critics — less academic and more activist — challenges the corporation to stay within a narrow economic sphere. Corporate activity in politics and the public sphere is viewed skeptically, even hatefully. The most pertinent example of these beliefs is the current effort to amend the Constitution to take away corporate “personhood.” The thought of corporations acting as “citizens” — whether for progressive ends or not — is seen as, at best, nonsensical and, at worst, destructive to democracy. This camp also strenuously argues against the 2010 Supreme Court ruling in Citizens United v. Federal Election Commission, which unleashed corporate political expenditures, and it has pushed for tougher campaign-finance legislation so that corporate political influence is circumscribed. It has gone unnoticed until now that the work of the pro-corporate citizenship scholars often directly conflicts with the work of the anti-corporate personhood activists. The arguments of those opposing corporate constitutional rights contradict and undermine the efforts of those who call on corporations to take a more active role in society to protect the interests of all corporate stakeholders, and vice versa. For the anti-personhood activists, the remedy is to keep corporations within a narrow purview; for the corporate citizenship scholars, the remedy is to ask the corporation to acknowledge and accept a broader range of obligations.
The core tenets of the progressive corporate law movement include the principles that shareholders are not supreme, and that corporations should be judged by more than economic measures. Anti-corporate personhood activists, meanwhile, often argue for limiting corporate rights by pointing out that the shareholder owners should be protected from managerial misuse of their funds, and that corporations should not themselves engage politically because they have only economic natures. This latter view surfaced in the Citizens United ruling itself, in which Justice John Paul Stevens penned the lead dissent and argued that corporate speech should be limited to protect shareholders’ investments. He saw shareholders as owners, as “those who pay for an electioneering communication,” and who “invested in the business corporation for purely economic reasons.” Moreover, Stevens argued that corporate political speech did not merit protection because “the structure of a business corporation … draws a line between the corporation’s economic interests and the political preferences of the individuals associated with the corporation; the corporation must engage the electoral process with the aim to enhance the profitability of the company, no matter how persuasive the arguments for a broader … set of priorities.” Stevens even quoted the controversial American Law Institute Principles of Corporate Governance: “[A] corporation … should have as its objective the conduct of business activities with a view to enhancing corporate profit and shareholder gain.” It looks like the opponents of Citizens United are so convinced of the dangers of corporate political activity that they are ready to throw stakeholder theory under the bus as part of their broader fight. But the difficulties run the other way as well. Case in point: the work of stakeholder theorists is now being cited to bolster the arguments of those seeking broader constitutional protections for corporations. The best current example is in the context of the recent suits brought by certain corporations to challenge the portion of the Affordable Care Act that requires employers to provide employee health insurance that covers contraceptive care. As many as 60 lawsuits are now pending across the country, and two — one from the Tenth Circuit and one from the Third — have already made it to the certiorari stage at the Supreme Court. These cases turn on the question of whether corporations may assert religion-based conscientious objections to the contraceptive mandate. That question depends in part on whether the corporations can have purposes and obligations that extend beyond the economic sphere. The irony in these cases is that the corporations, asserting an ideologically conservative argument to be free of government regulation, are using arguments often made by progressive stakeholder theorists. For example, in the Tenth Circuit decision upholding...
An Academic Meets Public Life
November 3rd, 2013, 05:59 AM
Rep. Rush Holt (D-NJ) has represented the Garden State’s 12th District since 1999. One of only two physicists in Congress, he began his career in academia. After teaching at Swarthmore College and working on arms control at the State Department, he became Assistant Director of the Princeton Plasma Physics Laboratory in 1989. He is perhaps most famous for beating the IBM computer Watson in “Jeopardy” in a demonstration match in 2011. Symposium Magazine interviewed him in October.
How well did academia prepare you for politics?
Academia was far more useful than one might think. There are so many topics in public policy that involve some science, and this is what helped me most before coming to the Hill. Often, there are cases where science is “embedded” in an important policy issue and people don’t even know it. Take election reform or voting rights – most people don’t think of these as technical issues, but they definitely have scientific components. If you look at hearings on the Hill on any given day, at least half will have something related to “embedded” science. But chances are, these will have no witnesses from the scientific community. Science is often avoided. This is one of the biggest gaps in understanding policy we have today.
Do you have examples beyond science where your background helped?
Absolutely. Even though I taught physics, I also once co-taught a course on arms control with a religion professor, and we both talked about just-war theory and arms. Another course I co-taught – with a professor of math and a professor of psychology – addressed the question of how to make decisions in the face of uncertainty. We never got that one quite right, but I think some students enjoyed it. I also often held small seminars with students at my house, where we went into far-reaching policy talks. Beyond science, there are many disciplines that have policy relevance and that get overlooked here on the Hill. Take the recent shutdown, and the subject of history. How much did we talk about past examples of shutdowns? Or more broadly, what does history show us of examples of a minority party trying to wield power beyond its numbers? It’s not as if people were bringing up, say, what the Federalist Papers said on this topic. But it would have been useful to add to the debate.
What about academic life more broadly? Can that lifestyle prepare you for politics?
I’ll have to say up front: Nothing in academia can really prepare you for politics. I think being a Representative is much harder – intellectually, physically, and psychologically – than being a professor. Intellectually, you have to constantly learn about a whole range of subjects that you may not know anything about, and then make policy decisions. You don’t just learn “more and more about less and less,” as the saying goes. You need to know something about everything. Physically, the demands of campaigning are grueling. And imagine what it’s like trying to stay in touch with 730,000 constituents, rather than several hundred students. Nothing in academia is like that. But the psychological part is perhaps the toughest. You don’t just have competitors in your department or field – you have people trying to undo you. It’s not just a race to get a paper out or come up with a new clever idea. You always have to be on guard against those who want to undo you.
What would you tell fellow lawmakers about how they can have a useful relationship with academic research? And what should academics do to emphasize the broader relevance of their work?
Members of Congress would be better off if there were more quantitative understanding of policy. There is very little of that, and we have such complicated issues we need to understand. But of course, this goes both ways. I think almost any academic work has some public implications, and academics should understand that. Even in the humanities and classics, you can do that — although it is, of course, harder. For every policy issue out there, there are academic studies that would illuminate it. The work is there, and lawmakers just need to read it. And I would tell academics who want to make a bigger difference: just do it. Use whatever time, whatever tools you have to get your research out. Or even run for office!