Hunkier than thou

Sexual selection

Scientists are finally succeeding where so many men have failed: in understanding why women find some guys handsome and others hideous

WHEN it comes to partners, men often find women’s taste fickle and unfathomable. But ladies may not be entirely to blame. A growing body of research suggests that their preference for certain types of male physiognomy may be swayed by things beyond their conscious control—such as the prevalence of disease or crime—and in predictable ways.

Masculine features—a big jaw, say, or a prominent brow—tend to reflect physical and behavioural traits, such as strength and aggression. They are also closely linked to physiological ones, like virility and a sturdy immune system.

The obverse of these desirable characteristics looks less appealing. Aggression is fine when directed at external threats, less so when it spills over onto the hearth. Sexual prowess ensures plenty of progeny, but it often goes hand in hand with promiscuity and a tendency to shirk parental duties or leave the mother altogether.

So, whenever a woman has to choose a mate, she must decide whether to place a premium on the hunk’s choicer genes or the wimp’s love and care. Lisa DeBruine, of the University of Aberdeen, believes that today’s women still face this dilemma and that their choices are affected by unconscious factors.

In a paper published earlier this year Dr DeBruine found that women in countries with poor health statistics preferred men with masculine features more than those who lived in healthier societies. Where disease is rife, this seemed to imply, giving birth to healthy offspring trumps having a man stick around long enough to help care for it. In more salubrious climes, therefore, wimps are in with a chance.

Now, though, researchers led by Robert Brooks, of the University of New South Wales, have taken another look at Dr DeBruine’s data and arrived at a different conclusion. They present their findings in the Proceedings of the Royal Society. Dr Brooks suggests that it is not health-related factors, but rather competition and violence among men that best explain a woman’s penchant for manliness. The more rough-and-tumble the environment, the researchers’ argument goes, the more women prefer masculine men, because they are better than the softer types at providing for mothers and their offspring.

An unhealthy relationship

Since violent competition for resources is more pronounced in unequal societies, Dr Brooks predicted that women would value masculinity more highly in countries with a higher Gini coefficient, which is a measure of income inequality. And indeed, he found that this was better than a country’s health statistics at predicting the relative attractiveness of hunky faces.
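The Gini coefficient Dr Brooks relies on has a simple definition: half the mean absolute difference between all pairs of incomes, divided by the mean income. As a rough illustration (not part of either study’s methodology, and with made-up incomes), a minimal sketch:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference.

    0 means perfect equality; values approaching 1 mean one
    person holds nearly all the income.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs.
    total = sum(abs(x - y) for x in incomes for y in incomes)
    return total / (2 * n * n * mean)

print(gini([30_000, 30_000, 30_000, 30_000]))  # 0.0 — perfect equality
print(gini([0, 0, 0, 120_000]))                # 0.75 — one earner takes all
```

For four people, 0.75 is the maximum possible value, (n − 1)/n, which is why a single earner taking everything does not yield exactly 1.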

The rub is that unequal countries also tend to be less healthy. So, in order to disentangle cause from effect, Dr Brooks compared Dr DeBruine’s health index with a measure of violence in a country: its murder rate. Again, he found that his chosen indicator predicts preference for facial masculinity more accurately than the health figures do (though less well than the Gini).

However, in a rejoinder published in the same issue of the Proceedings, Dr DeBruine and her colleagues point to a flaw in Dr Brooks’s analysis: his failure to take into account a society’s overall wealth. When she performed the statistical tests again, this time controlling for GNP, it turned out that the murder rate’s predictive power disappeared, whereas that of the health indicators persisted. In other words, the prevalence of violent crime seems to predict mating preferences only in so far as it reflects a country’s relative penury.
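The statistical move Dr DeBruine makes — a predictor’s apparent effect vanishing once a confounder is controlled for — can be shown with a toy partial-correlation sketch. The data below are simulated, not the study’s: two variables (labelled, hypothetically, murder rate and masculinity preference) are both driven by a third (wealth) and by nothing else, so their raw correlation is strong but the correlation between their residuals, after regressing each on wealth, is near zero.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def residuals(ys, zs):
    """Residuals of ys after a simple least-squares fit on zs."""
    n = len(ys)
    mz, my = sum(zs) / n, sum(ys) / n
    slope = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
             / sum((z - mz) ** 2 for z in zs))
    return [y - (my + slope * (z - mz)) for z, y in zip(zs, ys)]

random.seed(1)
wealth = [random.gauss(0, 1) for _ in range(500)]
# Both outcomes depend on wealth alone, plus independent noise.
murder = [w + random.gauss(0, 0.5) for w in wealth]
prefer = [w + random.gauss(0, 0.5) for w in wealth]

raw = pearson(murder, prefer)
controlled = pearson(residuals(murder, wealth), residuals(prefer, wealth))
print(f"raw correlation:        {raw:.2f}")        # strong
print(f"controlling for wealth: {controlled:.2f}")  # near zero
```

This is only the logic of the rejoinder in miniature; the actual analysis used cross-country regressions, not this two-step residual trick.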

The statistical tussle shows the difficulty of drawing firm conclusions from correlations alone. Dr DeBruine and Dr Brooks admit as much, and agree the dispute will not be settled until the factors that shape mating preferences are tested directly.

Another recent study by Dr DeBruine and others has tried to do just that. Its results lend further credence to the health hypothesis. This time, the researchers asked 124 women and 117 men to rate 15 pairs of male faces and 15 pairs of female ones for attractiveness. Each pair of images depicted the same set of features tweaked to make one appear ever so slightly manlier than the other (if the face was male) or more feminine (if it was female). Some were also made almost imperceptibly lopsided. Symmetry, too, indicates a mate’s quality because in harsh environments robust genes are needed to ensure even bodily development.

Next, the participants were shown another set of images, depicting objects that elicit varying degrees of disgust, such as a white cloth either stained with what looked like a bodily fluid, or a less revolting blue dye. Disgust is widely assumed to be another adaptation, one that warns humans to stay well away from places where germs and other pathogens may be lurking. So, according to Dr DeBruine’s hypothesis, people shown the more disgusting pictures ought to respond with an increased preference for masculine lads and feminine lasses, and for the more symmetrical countenances.

That is precisely what happened when they were asked to rate the same set of faces one more time. But it only worked with the opposite sex; the revolting images failed to alter what either men or women found attractive about their own sex. This means sexual selection, not other evolutionary mechanisms, is probably at work.

More research is needed to confirm these observations and to see whether other factors, like witnessing violence, bear on human physiognomic proclivities. For now, though, the majority of males who do not resemble Brad Pitt may at least take comfort that this matters less if their surroundings remain spotless.



The truth about suicide bombers

Are they religious fanatics? Deluded ideologues? New research suggests something more mundane: They just want to commit suicide.

Qari Sami did something strange the day he killed himself. The university student from Kabul had long since grown a bushy, Taliban-style beard and favored the baggy tunics and trousers of the terrorists he idolized. He had even talked of waging jihad. But on the day in 2005 that he strapped the bomb to his chest and walked into the crowded Kabul Internet cafe, Sami kept walking — between the rows of tables, beyond the crowd, along the back wall, until he was in the bathroom, with the door closed.

And that is where, alone, he set off his bomb.

The blast killed a customer and a United Nations worker, and injured five more. But the carnage could have been far worse. Brian Williams, an associate professor of Islamic studies at the University of Massachusetts Dartmouth, was in Afghanistan at the time. One day after the attack, he stood before the cafe’s hollowed-out wreckage and wondered why any suicide bomber would do what Sami had done: deliberately walk away from the target before setting off the explosives. “[Sami] was the one that got me thinking about the state of mind of these guys,” Williams said.

Eventually a fuller portrait emerged. Sami was a young man who kept to himself, a brooder. He was upset by the US forces’ ouster of the Taliban in the months following 9/11 — but mostly Sami was just upset. He took antidepressants daily. One of Sami’s few friends told the media he was “depressed.”

Today Williams thinks that Sami never really cared for martyrdom; more likely, he was suicidal. “That’s why he went to the bathroom,” Williams said.

The traditional view of suicide bombers is well established, and backed by the scholars who study them. The bombers are, in the post-9/11 age, often young, ideologically driven men and women who hate the laissez-faire norms of the West — or at least the occupations and wars of the United States — because they contradict the fundamentalist interpretations that animate the bombers’ worldview. Their deaths are a statement, then, as much as they are the final act of their faith; and as a statement they have been quite effective. They propagate future deaths, as terrorist organizers use a bomber’s martyrdom as propaganda for still more suicide terrorism.

But Williams is among a small cadre of scholars from across the world pushing the rather contentious idea that some suicide bombers may in fact be suicidal. At the forefront is the University of Alabama’s Adam Lankford, who recently published an analysis of suicide terrorism in the journal Aggression and Violent Behavior. Lankford cites Israeli scholars who interviewed would-be Palestinian suicide bombers. These scholars found that 40 percent of the terrorists showed suicidal tendencies; 13 percent had made previous suicide attempts, unrelated to terrorism. Lankford finds Palestinian and Chechen terrorists who were financially insolvent, recently divorced, or in debilitating health in the months prior to their attacks; a 9/11 hijacker whose final note to his wife describes how ashamed he was to have never lived up to her expectations; terrorist recruiters who admit they look for the “sad guys” for martyrdom.

For Lankford and like-minded thinkers, changing the perception of the suicide bomber changes the focus of any mission that roots out terrorism. If the suicide bomber can be viewed as something more than a brainwashed, religiously fervent automaton, anticipating a paradise of virgins in the clouds, then that suicide bomber can be seen as a nuanced person, encouraging a greater curiosity about the terrorist, Lankford thinks. The more the terrorist is understood, the less damage the terrorist can cause.

“Changing perceptions can save lives,” Lankford said.

Islam forbids suicide. Of the world’s three Abrahamic faiths, “The Koran has the only scriptural prohibition against it,” said Robert Pape, a professor at the University of Chicago who specializes in the causes of suicide terrorism. The phrase suicide bomber itself is a Western conception, and a pretty foul one at that: an egregious misnomer in the eyes of Muslims, especially from the Middle East. For the Koran distinguishes between suicide and, as the book says, “the type of man who gives his life to earn the pleasure of Allah.” The latter is a courageous Fedayeen — a martyr. Suicide is a problem, but martyrdom is not.

For roughly 1,400 years, since the time of the Prophet Muhammad, scholars have accepted not only the ubiquity of martyrdom in the Muslim world but the strict adherence to its principles by those who participate in it: A lot of people have died, and keep dying, for a cause. Only recently, and sometimes only reluctantly, has the why of martyrdom been challenged.

Ariel Merari is a retired professor of psychology at Tel Aviv University. After the Beirut barracks bombing in 1983 — in which a terrorist, Ismail Ascari, drove a truck bomb into a United States Marine barracks, killing 241 American servicemen — Merari began investigating the motives of Ascari, and the terrorist group with which the attack was aligned, Hezbollah. Though the bombing came during the Lebanese Civil War, Merari wondered whether it was less a battle within the conflict than a means chosen by one man, Ascari, to end his life. By 1990, Merari had published a paper asking the rest of academia to consider whether suicide bombers were actually suicidal. “But this was pretty much speculative, this paper,” Merari said.

In 2002, he approached a group of 15 would-be suicide bombers — Palestinians arrested and detained moments before their attacks — and asked if he could interview them. Remarkably, they agreed. “Nobody” — no scholar — “had ever been able to do something like this,” Merari said. He also approached 14 detained terrorist organizers. Some of the organizers had university degrees and were intrigued by the fact that Merari wanted to understand them. They, too, agreed to be interviewed. Merari was ecstatic.

Fifty-three percent of the would-be bombers showed “depressive tendencies” — melancholy, low energy, tearfulness, the study found — whereas 21 percent of the organizers exhibited the same. Furthermore, 40 percent of the would-be suicide bombers expressed suicidal tendencies; one talked openly of slitting his wrists after his father died. But the study found that none of the terrorist organizers were suicidal.

The paper was published last year in the journal Terrorism and Political Violence. Adam Lankford read it in his office at the University of Alabama. The results confirmed what he’d been thinking. The criminal justice professor had published a book, “Human Killing Machines,” about the indoctrination of ordinary people as agents for terrorism or genocide. Merari’s paper touched on themes he’d explored in his book, but the paper also gave weight to the airy speculation Lankford had heard a few years earlier in Washington, D.C., while he was earning his PhD from American University. There, Lankford had helped coordinate antiterrorism forums with the State Department for high-ranking military and security personnel. And it was at these forums, from Third World-country delegates, that Lankford first began to hear accounts of suicide bombers who may have had more than martyrdom on their minds. “That’s what sparked my interest,” he said.

He began an analysis of the burgeoning, post-9/11 literature on suicide terrorism, poring over the studies that inform the thinking on the topic. Lankford’s paper was published this July. In it, he found stories similar to Merari’s: bombers who unwittingly revealed suicidal tendencies in, say, their martyrdom videos, recorded moments before the attack; and organizers who valued their lives too much to end them, so they recruited others, often from the poorest, bleakest villages.

But despite the accounts from their own published papers, scholar after scholar had dismissed the idea of suicidality among bombers. Lankford remains incredulous. “This close-mindedness has become a major barrier to scholarly progress,” Lankford said.

Not everyone is swayed by his argument. Mia Bloom is a fellow at the International Center for the Study of Terrorism at Penn State University and the author of the book “Dying to Kill: The Allure of Suicide Terror.” “I would be hesitant to agree with Mr. Lankford,” she said. “You don’t want to conflate the Western ideas of suicide with something that is, in the Middle East, a religious ceremony.” For her, “being a little bit wistful” during a martyrdom video is not an otherwise hidden window into a bomber’s mind. Besides, most suicide bombers “are almost euphoric” in their videos, she said. “Because they know that before the first drop of blood hits the ground, they’re going to be with Allah.” (Lankford counters that euphoria, moments before one’s death, can also be a symptom of the suicidal person.)

One study in the academic literature directly refutes Lankford’s claim: “Suicide Terrorists: Are They Suicidal?” by the University of Nottingham’s Ellen Townsend, published in the journal Suicide and Life-Threatening Behavior in 2007. (The answer is a resounding “no.”)

Townsend’s paper was an analysis of empirical research on suicide terrorism — the scholars who’d talked with the people who knew the attackers. In Lankford’s own paper a few years after Townsend’s, he attacked her methodology: relying as she did on the accounts of a martyr’s family members and friends, who, Lankford wrote, “may lie to protect the ‘heroic’ reputations of their loved ones.”

When reached by phone, Townsend had a wry chuckle for Lankford’s “strident” criticism of her work. Yes, in the hierarchy of empirical research, the sort of interviews on which her paper is based have weaknesses: A scholar can’t observe everything, can’t control for all biases. “But that’s still stronger evidence than the anecdotes in Lankford’s paper,” Townsend said.

Robert Pape, at the University of Chicago, agrees. “The reason Merari’s view” — and by extension, Lankford’s — “is so widely discredited is that we have a handful of incidents of what looks like suicide and we have over 2,500 suicide attackers. We have literally hundreds and hundreds of stories where religion is a factor — and revenge, too…. To put his idea forward, [Lankford] would need to have 100 or more stories or anecdotes to even get in the game.”

He’s working on that. Lankford’s forthcoming study, to be published early next year, is “far more robust” than his first: a list of more than 75 suicide terrorists and why they were likely suicidal. He cites a Palestinian woman who, five months after lighting herself on fire in her parents’ kitchen, attempted a return to the hospital that saved her life. But this time she approached with a pack of bombs wrapped around her body, working as an “ideologue” in the service of the al-Aqsa Martyrs Brigade.

Lankford writes of al Qaeda-backed terrorists in Iraq who would target and rape local women, and then see to it that the victims were sent to Samira Ahmed Jassim. Jassim would convince these traumatized women that the only way to escape public scorn was martyrdom. She was so successful she became known as the Mother of Believers. “If you just needed true believers, you wouldn’t need them to be raped first,” Lankford said in an interview.

Lankford is also intrigued by the man who in some sense launched the current study of suicide terrorism: Mohammed Atta, the ringleader behind the 9/11 hijacking. “It’s overwhelming, his traits of suicidality,” Lankford said: an isolated, neglected childhood, a pathological shame over any sexual expression. “According to the National Institute of Mental Health there are 11 signs, 11 traits and symptoms for a man being depressed,” Lankford said. “Atta exhibited eight of them.”

If Atta were seen as something more than a martyr, or rather something other than one, the next Atta would not have the same effect on the world. That’s Lankford’s hope anyway. But transporting a line of thought from the halls of academia to the chambers of Congress or onto field agents’ dossiers is no easy task. Lankford said he has not heard from anyone in the government regarding his work. And even if the idea does reach a broader audience in the West, there is still the problem of convincing those in the Middle East of its import. Pape, at the University of Chicago, said people in the Muslim world commit suicide at half the rate they do in the Jewish or Christian world. The act is scorned, which makes it all the more difficult to accept any behaviors or recurring thoughts that might lead to it.

Still, there is reason for Lankford to remain hopeful. The Israeli government, for one, has worked closely with Merari on his research into suicidal tendencies among Palestinian terrorists. Then there is Iraq. Iraq is on the verge of autonomy for many reasons, but one of them is the United States’ decision to work with Iraqis instead of against them — and, more fundamentally, to understand them. Lankford thinks that if the same inquisitiveness were applied to suicide bombers and their motives, “the violence should decrease.”

Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.



All the president’s books

In my two years working in the president’s office at Harvard, before I was laid off in spring, I gave myself the job of steward of her books. Gift books would arrive in the mail, or from campus visitors, or from her hosts when she traveled; books by Harvard professors were kept on display in reception or in storage at our Massachusetts Hall office; books flowed in from publishers, or authors seeking blurbs, or self-published authors of no reputation or achievement, who sometimes sent no more than loosely bound manuscripts.

I took charge of the president’s books because it was my assigned job to write thank-you letters for them. I would send her the books and the unsigned draft replies on presidential letterhead; for each one, she sent me back the signed letter and, most of the time, the book, meaning she had no further use for it. Some books she would keep, but seldom for very long, which meant those came back to me too, in one of the smaller offices on the third floor of Mass Hall where there was no room to put them. Furthermore they weren’t so easily disposed of. Often they bore inscriptions, to President Drew Faust or to her and her husband from people they knew; and even if the volume was something rather less exalted — a professor from India sending his management tome or a book of Hindi poems addressed, mysteriously, to “Sir” or to the “vice-chancellor of Harvard University” — these books obviously couldn’t end up in a secondhand bookshop or charity bin or anywhere they could cause embarrassment. All were soon moved to an overflow space at the very end of the hall, coincidentally looking out at a donation bin for books at a church across the street.

One might feel depressed sitting amid so many unwanted books — so much unread knowledge and overlooked experience — but tending President Faust’s books became my favorite part of the job. No one noticed or interfered in what I did, which in a president’s office like Harvard’s, where everything is scrutinized, is uncommon. Even a thank-you note can say too much. I developed my own phrase for these notes — “I look forward to spending some time with it” — as a substitute for saying “I look forward to reading it,” because the president can’t possibly read all the books she receives, and there was always the chance she would run into the author somewhere, who might ask if she’d read his book yet.

Any Harvard president attracts books from supplicants, and this particular president attracted her own subcategory. Many books came from publishers or authors not at all shy about requesting a presidential blurb. These were easy to decline, and became easy to decline even when they came from the president’s friends, colleagues, acquaintances, neighbors, and others met over a distinguished career as a Civil War historian. This was the subcategory: Thanks to her specialty, we were building up a large collection of Civil War books, galleys and unpublished manuscripts — not just professional monographs, but amateurish family or local histories. These soon filled the overflow space in Massachusetts Hall, where water leaking from the roof during the unusual March rainstorms resulted in our having to discard several.

For everyone who sent us a book, the signed note back from the president mattered more than the book itself; both sides presumably understood that the president could buy or obtain any book she actually needed. The replies were signed by her — no auto-pen — which meant that even if she didn’t quite read your book, the president still held it in her hands, if only for a moment, perhaps scribbling something at the bottom of her note with a fine black pen, or crossing out the “Ms” or “Professor” heading and substituting the author’s first name.

I had all kinds of plans for these books. The inscribed books we had to keep, of course, no matter how dire or dreadful. (The archives would want its pick of them anyway, deciding which books would become keepsakes of this particular era at Harvard.) But the many good titles that remained could go to struggling small foreign universities or schools, to our soldiers and Marines overseas, or to local libraries as an act of goodwill from a powerful and oft-maligned neighbor. They could go to the Allston branch of the Boston Public Library, for instance, perhaps to be dubbed “the president’s collection,” with its own shelving but freely available to Allstonians to read or borrow.

None of these ideas came to fruition. All of them would have required me to rise to a realm where I was no longer in charge — indeed, where I didn’t have a foothold. I would have to call meetings, bring bigger players to the table. Harvard’s top bureaucracy is actually quite small, and most of it was, literally, in my immediate presence: Two doors to the left was one vice president, two doors to the right, around a tight corner, was another. But these were big-gesture folks alongside a resolutely small-gesture one (me), and without an intermediary to help build support for my ideas my books weren’t going anywhere except, once, into a cardboard box outside my office just before Christmas, where I encouraged staff to help themselves and perhaps two dozen books, or half what I started the box with, went out that way.

In all this, the important thing was that books were objects to be honored, not treated as tiresome throwaways, and that everyone in the building knew this. Books are how, traditionally, universities are built: John Harvard was not the founder of Harvard University but a clergyman who, two years after its founding, bequeathed it his library. I used to joke that the most boring book in our collection was the volume called the “Prince Takamado Trophy All Japan Inter-Middle School English Oratorical Contest,” but if I hear it isn’t still on a shelf somewhere in Mass Hall 20 years from now, I won’t be the only one who’s disappointed.

Eric Weinberger has reviewed books in the Globe since 2000, and taught writing at Harvard for 10 years.



In China’s Orbit

After 500 years of Western predominance, Niall Ferguson argues, the world is tilting back to the East.

“We are the masters now.” I wonder if President Barack Obama saw those words in the thought bubble over the head of his Chinese counterpart, Hu Jintao, at the G20 summit in Seoul last week. If the president was hoping for change he could believe in—in China’s currency policy, that is—all he got was small change. Maybe Treasury Secretary Timothy Geithner also heard “We are the masters now” as the Chinese shot down his proposal for capping imbalances in global current accounts. Federal Reserve Chairman Ben Bernanke got the same treatment when he announced a new round of “quantitative easing” to try to jump-start the U.S. economy, a move described by one leading Chinese commentator as “uncontrolled” and “irresponsible.”

“We are the masters now.” That was certainly the refrain that I kept hearing in my head when I was in China two weeks ago. It wasn’t so much the glitzy, Olympic-quality party I attended in the Tai Miao Temple, next to the Forbidden City, that made this impression. The displays of bell ringing, martial arts and all-girl drumming are the kind of thing that Western visitors expect. It was the understated but unmistakable self-confidence of the economists I met that told me something had changed in relations between China and the West.

One of them, Cheng Siwei, explained over dinner China’s plan to become a leader in green energy technology. Between swigs of rice wine, Xia Bin, an adviser to the People’s Bank of China, outlined the need for a thorough privatization program, “including even the Great Hall of the People.” And in faultless English, David Li of Tsinghua University confessed his dissatisfaction with the quality of Chinese Ph.D.s.

You could not ask for smarter people with whom to discuss the two most interesting questions in economic history today: Why did the West come to dominate not only China but the rest of the world in the five centuries after the Forbidden City was built? And is that period of Western dominance now finally coming to an end?

In a brilliant paper that has yet to be published in English, Mr. Li and his co-author Guan Hanhui demolish the fashionable view that China was economically neck-and-neck with the West until as recently as 1800. Per capita gross domestic product, they show, stagnated in the Ming era (1402-1626) and was significantly lower than that of pre-industrial Britain. China still had an overwhelmingly agricultural economy, with low-productivity cultivation accounting for 90% of GDP. And for a century after 1520, the Chinese national savings rate was actually negative. There was no capital accumulation in late Ming China; rather the opposite.

The story of what Kenneth Pomeranz, a history professor at the University of California, Irvine, has called “the Great Divergence” between East and West began much earlier. Even the late economist Angus Maddison may have been over-optimistic when he argued that in 1700 the average inhabitant of China was probably slightly better off than the average inhabitant of the future United States. Mr. Maddison was closer to the mark when he estimated that, in 1600, per capita GDP in Britain was already 60% higher than in China.

For the next several hundred years, China continued to stagnate and, in the 20th century, even to retreat, while the English-speaking world, closely followed by northwestern Europe, surged ahead. By 1820 U.S. per capita GDP was twice that of China; by 1870 it was nearly five times greater; by 1913 the ratio was nearly 10 to one.

Despite the painful interruption of the Great Depression, the U.S. suffered nothing so devastating as China’s wretched mid-20th century ordeal of revolution, civil war, Japanese invasion, more revolution, man-made famine and yet more (“cultural”) revolution. In 1968 the average American was 33 times richer than the average Chinese, using figures calculated on the basis of purchasing power parity (allowing for the different costs of living in the two countries). Calculated in current dollar terms, the differential at its peak was more like 70 to 1.

This was the ultimate global imbalance, the result of centuries of economic and political divergence. How did it come about? And is it over?

As I’ve researched my forthcoming book over the past two years, I’ve concluded that the West developed six “killer applications” that “the Rest” lacked. These were:

• Competition: Europe was politically fragmented, and within each monarchy or republic there were multiple competing corporate entities.

• The Scientific Revolution: All the major 17th-century breakthroughs in mathematics, astronomy, physics, chemistry and biology happened in Western Europe.

• The rule of law and representative government: This optimal system of social and political order emerged in the English-speaking world, based on property rights and the representation of property owners in elected legislatures.

• Modern medicine: All the major 19th- and 20th-century advances in health care, including the control of tropical diseases, were made by Western Europeans and North Americans.

• The consumer society: The Industrial Revolution took place where there was both a supply of productivity-enhancing technologies and a demand for more, better and cheaper goods, beginning with cotton garments.

• The work ethic: Westerners were the first people in the world to combine more extensive and intensive labor with higher savings rates, permitting sustained capital accumulation.

Those six killer apps were the key to Western ascendancy. The story of our time, which can be traced back to the reign of the Meiji Emperor in Japan (1867-1912), is that the Rest finally began to download them. It was far from a smooth process. The Japanese had no idea which elements of Western culture were the crucial ones, so they ended up copying everything, from Western clothes and hairstyles to the practice of colonizing foreign peoples. Unfortunately, they took up empire-building at precisely the moment when the costs of imperialism began to exceed the benefits. Other Asian powers—notably India—wasted decades on the erroneous premise that the socialist institutions pioneered in the Soviet Union were superior to the market-based institutions of the West.

Beginning in the 1950s, however, a growing band of East Asian countries followed Japan in mimicking the West’s industrial model, beginning with textiles and steel and moving up the value chain from there. The downloading of Western applications was now more selective. Competition and representative government did not figure much in Asian development, which instead focused on science, medicine, the consumer society and the work ethic (less Protestant than Max Weber had thought). Today Singapore is ranked third in the World Economic Forum’s assessment of competitiveness. Hong Kong is 11th, followed by Taiwan (13th), South Korea (22nd) and China (27th). This is roughly the order, historically, in which these countries Westernized their economies.

Today per capita GDP in China is 19% that of the U.S., compared with 4% when economic reform began just over 30 years ago. Hong Kong, Japan and Singapore were already there as early as 1950; Taiwan got there in 1970, and South Korea got there in 1975. According to the Conference Board, Singapore’s per capita GDP is now 21% higher than that of the U.S., Hong Kong’s is about the same, Japan’s and Taiwan’s are about 25% lower, and South Korea’s 36% lower. Only a foolhardy man would bet against China’s following the same trajectory in the decades ahead.

China’s has been the biggest and fastest of all the industrialization revolutions. In the space of 26 years, China’s GDP grew by a factor of 10. It took the U.K. 70 years after 1830 to grow by a factor of four. According to the International Monetary Fund, China’s share of global GDP (measured in current prices) will pass the 10% mark in 2013. Goldman Sachs continues to forecast that China will overtake the U.S. in terms of GDP in 2027, just as it recently overtook Japan.

But in some ways the Asian century has already arrived. China is on the brink of surpassing the American share of global manufacturing, having overtaken Germany and Japan in the past 10 years. China’s biggest city, Shanghai, already sits atop the ranks of the world’s megacities, with Mumbai right behind; no American city comes close.

Nothing is more certain to accelerate the shift of global economic power from West to East than the looming U.S. fiscal crisis. With a debt-to-revenue ratio of 312%, Greece is in dire straits already. But the debt-to-revenue ratio of the U.S. is 358%, according to Morgan Stanley. The Congressional Budget Office estimates that interest payments on the federal debt will rise from 9% of federal tax revenues to 20% in 2020, 36% in 2030 and 58% in 2040. Only America’s “exorbitant privilege” of being able to print the world’s premier reserve currency gives it breathing space. Yet this very privilege is under mounting attack from the Chinese government.

For many commentators, the resumption of quantitative easing by the Federal Reserve appears to have sparked a currency war between the U.S. and China. If the “Chinese don’t take actions” to end the manipulation of their currency, President Obama declared in New York in September, “we have other means of protecting U.S. interests.” The Chinese premier Wen Jiabao was quick to respond: “Do not work to pressure us on the renminbi rate…. Many of our exporting companies would have to close down, migrant workers would have to return to their villages. If China saw social and economic turbulence, then it would be a disaster for the world.”

Such exchanges are a form of pi ying xi, China’s traditional shadow puppet theater. In reality, today’s currency war is between “Chimerica”—as I’ve called the united economies of China and America—and the rest of the world. If the U.S. prints money while China effectively still pegs its currency to the dollar, both parties benefit. The losers are countries like Indonesia and Brazil, whose real trade-weighted exchange rates have appreciated since January 2008 by 18% and 17%, respectively.

But who now gains more from this partnership? With China’s output currently 20% above its pre-crisis level and that of the U.S. still 2% below, the answer seems clear. American policy-makers may utter the mantra that “they need us as much as we need them” and refer ominously to Lawrence Summers’s famous phrase about “mutually assured financial destruction.” But the Chinese already have a plan to reduce their dependence on dollar reserve accumulation and subsidized exports. It is a strategy not so much for world domination on the model of Western imperialism as for reestablishing China as the Middle Kingdom—the dominant tributary state in the Asia-Pacific region.

If I had to summarize China’s new grand strategy, I would do it, Chinese-style, as the Four “Mores”: Consume more, import more, invest abroad more and innovate more. In each case, a change of economic strategy pays a handsome geopolitical dividend.

By consuming more, China can reduce its trade surplus and, in the process, endear itself to its major trading partners, especially the other emerging markets. China recently overtook the U.S. as the world’s biggest automobile market (14 million sales a year, compared to 11 million), and its demand is projected to rise tenfold in the years ahead.

By 2035, according to the International Energy Agency, China will be using a fifth of all global energy, a 75% increase since 2008. It accounted for about 46% of global coal consumption in 2009, the World Coal Institute estimates, and consumes a similar share of the world’s aluminum, copper, nickel and zinc production. Last year China used twice as much crude steel as the European Union, United States and Japan combined.

Such figures translate into major gains for the exporters of these and other commodities. China is already Australia’s biggest export market, accounting for 22% of Australian exports in 2009. It buys 12% of Brazil’s exports and 10% of South Africa’s. It has also become a big purchaser of high-end manufactured goods from Japan and Germany. Once China was mainly an exporter of low-price manufactures. Now that it accounts for fully a fifth of global growth, it has become the most dynamic new market for other people’s stuff. And that wins friends.

The Chinese are justifiably nervous, however, about the vagaries of world commodity prices. How could they feel otherwise after the huge price swings of the past few years? So it makes sense for them to invest abroad more. In January 2010 alone, the Chinese made direct investments worth a total of $2.4 billion in 420 overseas enterprises in 75 countries and regions. The overwhelming majority of these were in Asia and Africa. The biggest sectors were mining, transportation and petrochemicals. Across Africa, the Chinese mode of operation is now well established. Typical deals exchange highway and other infrastructure investments for long leases of mines or agricultural land, with no questions asked about human rights abuses or political corruption.

Growing overseas investment in natural resources not only makes sense as a diversification strategy to reduce China’s exposure to the risk of dollar depreciation. It also allows China to increase its financial power, not least through its vast and influential sovereign wealth fund. And it justifies ambitious plans for naval expansion. In the words of Rear Admiral Zhang Huachen, deputy commander of the East Sea Fleet: “With the expansion of the country’s economic interests, the navy wants to better protect the country’s transportation routes and the safety of our major sea-lanes.” The South China Sea has already been declared a “core national interest,” and deep-water ports are projected in Pakistan, Burma and Sri Lanka.

Finally, and contrary to the view that China is condemned to remain an assembly line for products “designed in California,” the country is innovating more, aiming to become, for example, the world’s leading manufacturer of wind turbines and photovoltaic panels. In 2007 China overtook Germany in terms of new patent applications. This is part of a wider story of Eastern ascendancy. In 2008, for the first time, the number of patent applications from China, India, Japan and South Korea exceeded those from the West.

The dilemma posed to the “departing” power by the “arriving” power is always agonizing. The cost of resisting Germany’s rise was heavy indeed for Britain; it was much easier to slide quietly into the role of junior partner to the U.S. Should America seek to contain China or to accommodate it? Opinion polls suggest that ordinary Americans are no more certain how to respond than the president. In a recent survey by the Pew Research Center, 49% of respondents said they did not expect China to “overtake the U.S. as the world’s main superpower,” but 46% took the opposite view.

Coming to terms with a new global order was hard enough after the collapse of the Soviet Union, which went to the heads of many Western commentators. (Who now remembers talk of American hyperpuissance without a wince?) But the Cold War lasted little more than four decades, and the Soviet Union never came close to overtaking the U.S. economically. What we are living through now is the end of 500 years of Western predominance. This time the Eastern challenger is for real, both economically and geopolitically.

The gentlemen in Beijing may not be the masters just yet. But one thing is certain: They are no longer the apprentices.

Niall Ferguson is a professor of history at Harvard University and a professor of business administration at the Harvard Business School. His next book, “Civilization: The West and the Rest,” will be published in March.



The God-Science Shouting Match: A Response

In reading the nearly 700 reader responses to my Oct. 17 essay for The Stone (“Morals Without God?”), I notice how many readers are relieved to see that there are shades of gray when it comes to the question of whether morality requires God. I believe that such a discussion needs to revolve around both the distant past, in which religion likely played little or no role if we go back far enough, and modern times, in which it is hard to disentangle morality and religion. The latter point seemed obvious to me, yet proved controversial. Even though 90 percent of my text questions the religious origins of human morality, and wonders if we need a God to be good, it is the other 10 percent — in which I tentatively assign a role to religion — that drew most ire. Atheists, it seems (at least those who responded here), don’t accept anything less than 100 percent agreement with their position.

To have a productive debate, religion needs to recognize the power of the scientific method and the truths it has revealed, but its opponents need to recognize that one cannot simply dismiss a social phenomenon found in every major society. If humans are inherently religious, or at least show rituals related to the supernatural, there is a big question to be answered. The issue is not whether or not God exists — which I find to be a monumentally uninteresting question defined, as it is, by the narrow parameters of monotheism — but why humans universally feel the need for supernatural entities. Is this just to stay socially connected or does it also underpin morality? And if so, what will happen to morality in its absence?

Just raising such an obvious issue has become controversial in an atmosphere in which public forums seem to consist of pro-science partisans or pro-religion partisans, and nothing in between. How did we arrive at this level of polarization, this small-mindedness, as if we are taking part in the Oxford Debating Society, where all that matters is winning or losing? It is unfortunate when, in discussing how to lead our lives and why to be good — very personal questions — we end up with a shouting match. There are in fact no answers to these questions, only approximations, and while science may be an excellent source of information it is simply not designed to offer any inspiration in this regard. It used to be that science and religion went together, and in fact (as I tried to illustrate with Bosch’s paintings) Western science ripened in the bosom of Christianity and its explicit desire for truth. Ironically, even atheism may be looked at as a product of this desire, as explained by the philosopher John Gray:

Christianity struck at the root of pagan tolerance of illusion. In claiming that there is only one true faith, it gave truth a supreme value it had not had before. It also made disbelief in the divine possible for the first time. The long-delayed consequence of the Christian faith was an idolatry of truth that found its most complete expression in atheism. (Straw Dogs, 2002).

Those who wish to remove religion and define morality as the pursuit of scientifically defined well-being (à la Sam Harris) should read up on earlier attempts in this regard, such as the Utopian novel “Walden Two” by B. F. Skinner, who thought that humans could achieve greater happiness and productivity if they just paid better attention to the science of reward and punishment. Skinner’s colleague John Watson even envisioned “baby factories” that would dispense with the “mawkish” emotions humans are prone to, an idea applied with disastrous consequences in Romanian orphanages. And talking of Romania, was not the entire Communist experiment an attempt at a society without God? Apart from the question of how moral these societies turned out to be, I find it intriguing that over time Communism began to look more and more like a religion itself. The singing, marching, reciting of poems and pledges and waving in the air of Little Red Books smacked of holy fervor, hence my remark that any movement that tries to promote a certain moral agenda — even while denying God — will soon look like any old religion. Since people look up to those perceived as more knowledgeable, anyone who wants to promote a certain social agenda, even one based on science, will inevitably come face to face with the human tendency to follow leaders and let them do the thinking.

What I would love to see is a debate among moderates. Perhaps it is an illusion that this can be achieved on the Internet, given how it magnifies disagreements, but I do think that most people will be open to a debate that respects both the beliefs held by many and the triumphs of science. There is no obligation for non-religious people to hate religion, and many believers are open to interrogating their own convictions. If the radicals on both ends are unable to talk with each other, this should not keep the rest of us from doing so.

Frans B. M. de Waal is a biologist interested in primate behavior. He is C. H. Candler Professor in Psychology, and Director of the Living Links Center at the Yerkes National Primate Research Center at Emory University, in Atlanta, and a member of the National Academy of Sciences and the Royal Dutch Academy of Sciences. His latest book is “The Age of Empathy.”



Stories vs. Statistics

Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.

I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true about the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred again and again would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.

The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Molière’s character who was shocked to find that he’d been speaking prose his whole life.

Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable committing may be telling. It gives some indication of their intellectual personality type, on which side of the two cultures (or maybe two coutures) divide they’re most comfortable.

People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.

The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely: that Smith took a bribe from a lobbyist, or that Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns? Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements A, B, and C, the probability of A alone is always at least as great as the probability of A, B, and C together, since whenever A, B, and C all occur, A occurs, but not necessarily vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
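The arithmetic behind the fallacy is easy to check by simulation. The sketch below is my own illustration, not from the essay, and the trait frequencies in it are invented purely for demonstration; the point is that the conjoined option can never occur more often than the single condition it contains:

```python
import random

random.seed(0)
trials = 100_000
count_teller = count_both = 0
for _ in range(trials):
    # Invented, purely illustrative trait frequencies:
    is_teller = random.random() < 0.05     # option a: bank teller
    is_feminist = random.random() < 0.60
    if is_teller:
        count_teller += 1
        if is_feminist:                    # option b: teller AND feminist
            count_both += 1

# Every person counted for option b is also counted for option a,
# so the conjunction can never be the more frequent outcome.
print(count_both <= count_teller)  # True
```

The same inequality holds for any choice of frequencies, dependent or not, since the people satisfying option b form a subset of those satisfying option a.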

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Embedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)

Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their member(s). Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.

In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.

The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context.

Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy, and so the probability of two boys is 1/2.
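Both answers can be recovered from the same simulated population. The sketch below (my own illustration of the standard reasoning, not code from the essay) generates random two-child families and conditions on the two different ways of learning about the boy:

```python
import random

random.seed(1)
# Each family: a pair of children, each independently "B" or "G".
families = [(random.choice("BG"), random.choice("BG")) for _ in range(400_000)]

# Case 1: we are told only that the family has at least one boy.
told = [f for f in families if "B" in f]
p_told = sum(f == ("B", "B") for f in told) / len(told)   # ~ 1/3

# Case 2: the parents introduce one child, picked at random, who turns out to be a boy.
met = [f for f in families if random.choice(f) == "B"]
p_met = sum(f == ("B", "B") for f in met) / len(met)      # ~ 1/2

print(round(p_told, 2), round(p_met, 2))
```

The only difference between the two cases is how the information arrived, which is exactly the intensional point: the event is the same, but the conditioning is not.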

Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
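The 13/27 figure falls out of straightforward counting over the 14 equally likely (sex, weekday) combinations for each child. A brief enumeration, my own sketch of the standard argument rather than anything from the essay, confirms it:

```python
from fractions import Fraction
from itertools import product

DAYS = range(7)                        # day 2 will stand in for Tuesday
kids = list(product("BG", DAYS))       # each child: (sex, weekday) -- 14 possibilities
families = list(product(kids, kids))   # 14 * 14 = 196 equally likely families

# Families with at least one boy born on "Tuesday":
qualifying = [f for f in families if ("B", 2) in f]        # 14 + 14 - 1 = 27 of them
two_boys = [f for f in qualifying if all(s == "B" for s, _ in f)]  # 7 + 7 - 1 = 13

print(Fraction(len(two_boys), len(qualifying)))  # 13/27
```

The inclusion-exclusion counts in the comments show why the weekday matters: specifying Tuesday nearly doubles the weight of the two-boy families, pulling the answer up from 1/3 toward 1/2.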

Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they too frequently are invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.
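One of those coincidences, the birthday paradox, yields to a short exact calculation. The sketch below is standard arithmetic rather than anything from the essay: it computes the chance that some pair in a group of n people shares a birthday, assuming 365 equally likely days:

```python
def prob_shared_birthday(n: int) -> float:
    """P(at least two of n people share a birthday), assuming 365 equally likely days."""
    p_all_distinct = 1.0
    for k in range(n):
        # The (k+1)-th person must avoid the k birthdays already taken.
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct

# With just 23 people, a shared birthday is already more likely than not.
print(round(prob_shared_birthday(23), 3))  # 0.507
```

The surprise, as with the other coincidences, comes from counting pairs rather than people: 23 people yield 253 pairs, each a fresh chance at a match.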

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, on context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.

John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”



The Cold-Weather Counterculture Comes to an End

Not so long ago, any young man who was so inclined could ski all winter in the mountains of Colorado or Utah on a pauper’s budget. The earnings from a part-time job cleaning toilets or washing dishes were enough to keep him gliding down the mountain by day and buzzing on cheap booze by night, during that glorious adrenaline come-down that these days often involves an expensive hot-stone massage and is unashamedly referred to as “après-ski.”

He had a pretty good run, the American ski bum, but Jeremy Evans’s “In Search of Powder” suggests that the American West’s cold-weather counterculture is pretty much cashed. From Vail to Sun Valley, corporate-owned ski resorts have driven out family-run facilities, and America’s young college grads have mostly ceded control of the lift lines to students from south of the equator on summer break.

Mr. Evans, a newspaper reporter who himself “ignored the next logical step in adult life” to live in snowy Lake Tahoe, identifies with the graying powder hounds that fill his pages, and for the most part he shares their nostalgia for the way things used to be.

During the 1960s and 1970s, in alpine enclaves like Park City, Utah, and Aspen, Colo., hippies squatted in old miners’ shacks and clashed with rednecks. In Tahoe, Bay Area youths developed the liberated style of skiing known as “hot-dogging,” while in Jackson Hole, Wyo., stylish Europeans such as Jean-Claude Killy and Pepi Stiegler went one step further, inspiring generations of American youth to become daredevils on skis—and, eventually, to get paid for it.

Whether these ski bums intended to or not, they helped popularize the sport and make it profitable. What followed was reminiscent of urban gentrification. As the second-home owners took over, property prices outpaced local wages. Today’s would-be ski bum faces prohibitive commutes, and immigrant workers have taken over the sorts of menial jobs that carefree skier types once happily performed.

Skiing and snowboarding aren’t even a ski resort’s main attraction anymore. Rather, they are marketing tools used to boost real-estate sales and entice tourists who would just as soon go on a cruise or take a trip to Las Vegas. Four corporations—Vail Resorts, Booth Creek, Intrawest and American Skiing Co.—run most of the big mountains. Even Telluride, once considered remote and wild, plays host to Oprah Winfrey, Tom Cruise and a parade of summer festivals.

In 2002, Hal Clifford took the corporate ski industry to task in “Downhill Slide.” Mr. Evans’s book incorporates Mr. Clifford’s most salient findings, but his oral-history method limits his view of the topic. Mr. Evans would have done well, in particular, to mine the obvious connections between ski and surf culture. (Dick Barrymore’s landmark 1969 film, “Last of the Ski Bums,” was more or less an alpine remake of the classic 1966 surfer documentary “The Endless Summer.”)

Like surfing, skiing first went from sport to lifestyle during the 1960s and thus came of age with the baby boomers. They made skiing sexy and rebellious, and then they made it a big business. Two of America’s most exclusive mountain resorts, Beaver Creek (in Colorado) and Deer Valley (in Utah), opened around 1980—right when baby boomers hit a sweet spot in terms of athleticism and net worth. Now, as they age, golf courses and luxury spas have become de rigueur in big ski towns.

Another boomer legacy is the entire notion that the old-fashioned ski bum was a blessed soul who somehow belonged to the natural order. More likely, his was just a brief, Shangri-La moment. For ski towns in the West, the real problem is what will happen as the prosperous, once free-spirited baby-boomer generation begins to wane.

Mr. Hartman contributes to and

