Chilean President Wrote ‘Deutschland Über Alles’ in German Guest Book

Diplomatic Gaffe

“Deutschland Über Alles”: Chilean President Sebastián Piñera wrote his controversial dedication into the official guest book of German President Christian Wulff (left).

In a gesture of thanks for Germany’s help in rescuing the 33 Chilean miners, President Sebastián Piñera wrote the historically charged slogan ‘Deutschland Über Alles’ into the guest book of German President Christian Wulff last week. Now Wulff’s office is pondering how to remove the words.

Chilean President Sebastián Piñera has apologized for writing the words “Deutschland Über Alles,” a phrase frowned on in Germany because of its association with the Nazi era, into the official guest book of German President Christian Wulff during a visit to Berlin last week.

Media reports claimed Piñera had said on Monday that he had learned the slogan in school in the 1950s and 1960s and understood it to be a celebration of German unification in the 19th century under Chancellor Otto von Bismarck. He said he was unaware that it was “linked to that country’s dark past.”

The phrase is the first line of the opening verse of Germany’s national anthem; that verse was dropped after World War II because it was deemed too nationalistic. Piñera had been on a European trip to thank countries for their help in freeing the 33 Chilean miners. A spokesman for Wulff’s office played down the gaffe on Monday, saying the president had no doubt intended to express something positive about Germany.

Bild’s Loser of the Day

Piñera isn’t the only one to have unwittingly broken the taboo. Even experienced Europeans have done so. Last year, the French presidential office was so excited that Chancellor Angela Merkel would attend the official celebrations marking the French victory in World War I (the first German leader ever to do so) that its press department announced the choir of the French army would sing “Deutschland Über Alles” at the event, the Frankfurter Allgemeine Zeitung newspaper reported at the time.

The mistake was spotted in time, and the choir confined itself to singing the third verse, which has been in official use since the end of World War II and starts with the inoffensive words: “Unity and justice and freedom for the German fatherland!”

Bild, Germany’s best-selling tabloid newspaper, responded to the faux pas on Tuesday by declaring Piñera its loser of the day, a regular item on its front page. “He’s better at rescuing miners,” the paper declared.

Meanwhile, “Deutschland Über Alles” continues to sully the pages of Wulff’s guest book. Wulff’s office now plans to discuss the matter with the Chilean embassy in Berlin. Piñera may get a chance to revise his entry.


Full article and photo:,1518,725382,00.html

Stories vs. Statistics

Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.

I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true about the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.

The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Molière’s character who was shocked to find that he’d been speaking prose his whole life.

Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable making may be telling. It gives some indication of their intellectual personality type, of which side of the two cultures (or maybe two coutures) divide they’re most comfortable on.

People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.

The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely: that Smith took a bribe from a lobbyist, or that Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns? Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements A, B, and C, the probability of A is always at least as great as the probability of A, B, and C together, since whenever A, B, and C all occur, A occurs, but not vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
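
The inequality behind the fallacy is easy to check numerically. The sketch below uses made-up probabilities chosen only for illustration (P(bank teller) = 0.05, P(active feminist) = 0.6, assumed independent); whatever values are used, the conjunction can never outnumber its single conjunct:

```python
import random

random.seed(0)

# Illustrative, made-up probabilities; the traits are assumed independent.
N = 100_000
teller_count = 0
teller_and_feminist_count = 0
for _ in range(N):
    teller = random.random() < 0.05      # hypothetical P(bank teller)
    feminist = random.random() < 0.6     # hypothetical P(active feminist)
    teller_count += teller
    teller_and_feminist_count += teller and feminist

# The joint event is a subset of the single event, so its count is never larger.
assert teller_and_feminist_count <= teller_count
print(teller_count / N, teller_and_feminist_count / N)
```

The simulation is unnecessary in principle, since P(A and B) ≤ P(A) holds by definition, but seeing the counts makes the subset relationship concrete.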

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Embedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)

Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their members. Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.

In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.

The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context. 

Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents, who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy — and so the probability of two boys is 1/2.
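
The two conditioning schemes can be told apart in a quick simulation (a sketch assuming boys and girls are equally likely and births independent): in the first we keep every family with at least one boy; in the second we keep only families in which a randomly encountered child happens to be a boy.

```python
import random

random.seed(1)
N = 200_000

at_least_one = 0  # scheme 1: told the family has at least one boy
both_1 = 0
met_boy = 0       # scheme 2: we meet one child at random and it's a boy
both_2 = 0

for _ in range(N):
    kids = [random.choice("BG") for _ in range(2)]
    if "B" in kids:
        at_least_one += 1
        both_1 += kids == ["B", "B"]
    met = random.choice(kids)  # the child the parents happen to introduce
    if met == "B":
        met_boy += 1
        both_2 += kids == ["B", "B"]

print(both_1 / at_least_one)  # close to 1/3
print(both_2 / met_boy)       # close to 1/2
```

The difference is entirely in how the evidence arrives: the same fact ("a boy exists in this family") filters the sample space differently under the two procedures.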

Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
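
The 13/27 answer can be checked by brute-force enumeration. This sketch assumes sexes and weekdays are uniform and independent, and conditions on the family having at least one boy born on a Tuesday:

```python
from itertools import product

# Each child is a (sex, weekday) pair: 2 x 7 = 14 equally likely types.
kids_space = list(product("BG", range(7)))
# Two children: 14 x 14 = 196 equally likely families.
families = list(product(kids_space, repeat=2))

# Condition: at least one boy born on a Tuesday (call Tuesday day 1).
hit = [f for f in families if any(s == "B" and d == 1 for s, d in f)]
both_boys = [f for f in hit if all(s == "B" for s, _ in f)]

print(len(both_boys), "/", len(hit))  # prints 13 / 27
```

Counting confirms the arithmetic: 196 − 13² = 27 families survive the condition, and 7² − 6² = 13 of them are two-boy families.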

Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they are all too frequently invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various Bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.
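
The birthday paradox mentioned above is the standard illustration of how cheaply coincidences arise. Assuming 365 equally likely birthdays and ignoring leap years, a few lines compute the probability that some pair in a group of n people shares a birthday:

```python
from math import prod

def p_shared_birthday(n):
    # P(at least one shared birthday among n people) = 1 - P(all distinct),
    # where P(all distinct) = (365/365) * (364/365) * ... * ((365-n+1)/365).
    p_all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1 - p_all_distinct

print(p_shared_birthday(23))  # just over 0.5
```

With only 23 people the odds of a match already exceed even, because what matters is the number of pairs, 23 × 22 / 2 = 253 of them, not the number of people.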

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.

John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”




Attention passengers: It’s perfectly safe to use your cellphones

With more than 28,000 commercial flights in the skies over the United States every day, there are probably few sentences in the English language that are spoken more often and insistently than this: “Please turn off all electronic devices.”

Asking why passengers must turn off their mobile phones on airplanes seems like an odd question. Because! With a sentence said so often there simply must be a reason for it. Or — is there not?

Flight attendants are required by the Federal Communications Commission to make their preflight safety announcement because of “potential interference to the aircraft’s navigation and communication systems.” Perhaps this seems like a no-brainer: of course you turn off your cellphone inside a piece of technology as sensitive as an airplane. In our civilized times, there are few things imaginable more likely to lead to direct physical conflict with the person in the seat next to you than turning on your cellphone during takeoff and nonchalantly calling your hairdresser to reschedule that appointment next Wednesday. In Great Britain, a 28-year-old oil worker was sentenced to 12 months in prison in 1999 for refusing to switch off his cellphone on a flight from Madrid to Manchester. He was convicted of “recklessly and negligently endangering” an aircraft.

Yet with people losing their freedom over the rule, it may come as a bit of a surprise that scientific studies have never actually proven a serious risk associated with the use of mobile phones on airplanes. In the late 1990s, when cellphones and mobile computers became mainstream, Boeing received reports from concerned pilots who had experienced system failures and suggested the problems may have been caused by laptops and phones the cabin crew had seen passengers using in-flight. Boeing actually bought the equipment from the passengers but was unable to reproduce any of the problems, concluding it had “not been able to find a definite correlation between passenger-carried portable electronic devices and the associated reported airplane anomalies.”

The National Aeronautics and Space Administration released a study in 2003, stating that of eight tested cellphone models, none would be likely to interfere with navigation or radio systems of the aircraft — systems which are, of course, carefully shielded against all sources of natural or artificial radiation by design. Another study by the IEEE Electromagnetic Compatibility Society concluded in 2006 that “there is no definitive instance of an air accident known to have been caused by a passenger’s use of an electronic device.”

The same study also found that, on average, one to four calls are illegally made during every flight, meaning that there are tens of thousands of phone calls from American airplanes every day — and still no definitive evidence of a problem.

What makes the ban on mobile phones in the United States look even more odd is that it doesn’t exist in other parts of the world. The European Aviation Safety Agency lifted the ban in 2007. “EASA does not ban the use of mobile phones on board as they are not considered to be a threat to safety,” says EASA spokesman Dominique Fouda. Several airlines like Ryanair and Emirates have since allowed passengers to use their phones during flights. According to EASA, some American airlines will soon allow the use of cellphones outside of US airspace.

While the safety argument sounds like a neat story every passenger would understand, there seems to be a second, more important reason for the ban. According to the Federal Aviation Administration, the current ban by the Federal Communications Commission was not issued out of safety concerns, but “because of potential interference with ground networks,” says FAA spokeswoman Arlene Salac. An airplane with activated mobile phones flying over a city could cause several hundred phones to simultaneously log into a base station on the ground, perhaps overloading it and threatening the network.

Europeans seem not to worry about this problem, since European airlines that allow cellphones install base stations inside each aircraft, forwarding all calls through the plane’s satellite system and charging passengers by the minute. If all phones are logged into the base station on the airplane, they will not cause trouble on the ground.

But even if the FCC were to revoke the ban, the FAA’s current regulations for the certification of electronic equipment would apply. This would mean air carriers would have to show that every particular cellphone model is compatible with every particular airplane type. With hundreds of cellphone models released every year, this would mean a continuing source of cost for airlines, while the only benefit would be the convenience of passengers.

In the end, the ban of mobile phones on airplanes might not be a story about safety concerns, but about the psychology of governmental agencies. Bureaucracy, in theory, is designed to eliminate irrationality by replacing the biased judgment of individuals with a system of fixed requirements. Bureaucracies are machines to make judgments according to the best objective knowledge available. Given that, and the suspicion that the threat by mobile phones is indeed minor, how is it possible that two bureaucratic agencies, the FAA and the FCC, act with disproportionate caution? Is the apparatus not so rational after all?

“The point of bureaucracy is to have a less emotive discussion. But that doesn’t mean you get rid of that factor,” says Daniel Carpenter, professor of government at Harvard University.

When it comes to the question of allowing people to use their mobile phones, the bureaucratic incentive to do so could not be weaker. For any agency involved in this, two errors are possible. The first is what Carpenter calls an error of commission: The agency allows mobile phones and something bad happens, either an airplane crash or a network failure on the ground. The other possible error is one of omission: The agency fails to allow the use of mobile phones, though they are safe, and people subsequently cannot make phone calls while on the airplane.

“One of these errors is much more vivid and evocative. The error of not letting people talk on cellphones when they should — it’s hard to see people dying from that,” says Carpenter.

This suggests the most important reason mobile phones are still banned on airplanes might be the absence of anger — the fact that passengers are not organizing and demanding the right to make calls.

Still, there might be yet another way of thinking about the issue. Despite the current ban, Congress debated the “Halting Airplane Noise to Give Us Peace Act” (also known as the “Hang Up Act”) in 2008, which would have prohibited all voice communications on commercial flights. The bill was never voted on, but the reasoning behind it was simple: no calls in airplanes, not because the calls are dangerous — but because they are so annoying.

Justus Bender is a reporter with Die Zeit, a weekly newspaper based in Hamburg, Germany.



The Seafarer

Rescue ship: Joshua Slocum (at left), his wife and sons Victor and Garfield aboard the Liberdade, the 35-foot ‘sailing canoe’ he built to get them home after they were shipwrecked on the coast of Brazil in 1888.

Joshua Slocum is remembered for two things—being the first person to sail single-handedly around the world and writing a marvelous account of the journey. In his biography of Slocum, “The Hard Way Around,” Geoffrey Wolff focuses less on the nautical and literary achievements than on what Slocum did before them.

It is, for the most part, not a pretty picture. The New York Times called Slocum a barbarian after he was imprisoned for allegedly mistreating a sailor. On one of the vessels he commanded, in the 1880s, several crewmen contracted smallpox, and Slocum was arrested again, this time for killing a mutinous member of the crew. Although he eventually resumed command of that ship, it then went aground and was lost in Brazil. By age 45, two of the ships Slocum commanded had been wrecked, his first wife and three of his children had died, and he was unemployed and broke.

I confess that, halfway into this tale of woe, I found myself thinking about bailing out. The early chapters seemed slow-moving, especially for anyone expecting an adventure story. There are also some odd change-ups in style, from carefully considered, grown-up prose to informal sentences such as this one: “It was a miracle the hulk didn’t sink, though if you wait a bit, she will.”

But Mr. Wolff’s writing was not my problem. I was troubled by his overall approach to his subject. Slocum’s solo circumnavigation—he set out from Boston in April 1895 and arrived back in Newport, R.I., in June 1898—was an extraordinary feat, and Slocum’s book about it all, “Sailing Alone Around the World” (1899), is an intoxicating masterpiece. I saw no purpose in exposing the great man’s failings more than a century after his death.

But I kept reading, propelled by Mr. Wolff’s engaging description of the life of a young seaman during the great age of sail. Slocum was 16 when he went to sea in 1860. He wanted to command one of the tall-masted clipper ships, and once he achieved his objective, 10 years later, he didn’t just chart the ship’s course and direct its crew. He also functioned as the resident entrepreneur, identifying cargos to carry and negotiating the terms. He called on exotic ports throughout the world, with his wife and children onboard most of the time.

But Slocum was born too late. The clipper-ship era is probably the most celebrated period of marine history—the inspiration for the paintings and prints that seem to hang everywhere, from stodgy clubs to fast-food restaurants. But it didn’t last long. In 1860, wood-hulled sailing vessels were already being displaced by steel ships powered by steam. By the time Slocum took over his most impressive ship, the 233-foot-long Northern Lights, in 1881, the tide was flowing swiftly against him.

It is in the attempt to connect Slocum’s circumstances and choices to his failures and his immortalizing achievements that Mr. Wolff finds book-worthy purpose. After Slocum lost his ship in Brazil in 1887, he built a 35-foot “sailing canoe” and set out on a 5,000-mile journey back to the U.S., this time with his second wife, Hettie (his first wife, Virginia, had died three years before), and two of his children. This is how Slocum, in his book, explained the switch to small-boat sailing: “The old boating trick came back fresh to me, the love of the thing itself gaining on me as the little ship stood out; and my crew with one voice said, ‘Go on.’”

Not far into the journey, the little boat ran into a squall and the sails, which had been sewn by Hettie, shredded. Seeking to answer the question of what Slocum was thinking at such times, Mr. Wolff bores into Slocum’s prose like a literary detective. Of Slocum’s lifetime sailing obsession and his arresting phrase “the love of the thing itself” he writes that it came from “irreducible, hard-nut recognition and radiant sentiment.”

Mr. Wolff doesn’t get around to describing Slocum’s 46,000-mile lap around the planet until his book’s penultimate chapter. By then many readers will be so fascinated by the man and the why-did-he-do-it question that they may be eager to read Slocum’s own book, which has never gone out of print.

What is it that drives some people to undertake the audacious? We live at a time when many of the most important firsts have already been claimed, but people seem more obsessed than ever with establishing records, some of them of dubious distinction. Businessmen-climbers search for mountain peaks that have never been surmounted, marathoners go to Antarctica to run, and a procession of teenagers seeks to replicate Slocum’s circumnavigation (with the benefit of high-tech boats, push-button navigational equipment and satellite telephones).

Was Slocum like these people? Before I read Mr. Wolff’s book, I would have said no, that his motives and achievement were more pure and singular. Now I am unsure. Many modern-day adventurers are driven by ego. And ego probably played a role with Slocum, who was no doubt eager to demonstrate that he was, in spite of his many setbacks, exceptionally skilled at what he did, to the point, as he put it, of “neglecting all else.” And aren’t some contemporary adventurers individuals who, like Slocum, feel as if they have run out of other options?

Then again, perhaps Slocum was different. Maybe it was all about “the thing itself.” In November 1908, Slocum sailed from his home on Martha’s Vineyard to undertake a solo exploration of the Venezuelan coast and the Amazon. Somewhere along the way he disappeared. No one knows exactly what happened.

Mr. Knecht is the author of “The Proving Ground: The Inside Story of the 1998 Sydney to Hobart Race.”



The Other ‘G’ Spot

At the beginning of the 20th century the British psychologist Charles Spearman “discovered” the idea of general intelligence. Spearman observed that students’ grades in different subjects, and their scores on various tests, were all positively correlated. He then showed that this pattern could be explained mathematically by assuming that people vary in special abilities for the different tests as well as a single general ability—or “g”—that is used for all of them.

John Duncan, one of the world’s leading cognitive neuroscientists, explains Spearman’s work early in “How Intelligence Happens,” before moving on to his own attempts to locate the source of Spearman’s “g” in the brain. To get us grounded, Mr. Duncan also provides a wonderfully compact summary of brain architecture and function. Throughout the book, he makes it clear that his fascination with intelligent behavior has to do with how the brain brings it about—he leaves it to others to ponder things like the economic import of intelligence and how it is influenced by genes, upbringing, and education.

He also doesn’t waste time dilating on the question of what, precisely, we mean by “intelligence.” Defining terms is not scientists’ strong suit, but their attempts can be thought-provoking. Two decades ago, the cognitive-science and artificial-intelligence pioneer Allen Newell proposed that an entity should be considered intelligent to the extent that it uses all the information it has when making decisions. But according to that definition, a device as simple as a thermostat would have perfect intelligence—not terribly helpful when trying to understand human differences.

I have been doing research on intelligence for more than a decade, and I have to confess that I do not know of a perfect definition. But most psychologists consider intelligence a general ability to perform well on a wide variety of mental tasks and challenges. In everyday speech, it sometimes means roughly the same thing: We call someone “intelligent” if we believe that their mental abilities are generally high—not if they are skilled in just one narrow field.

Mr. Duncan’s early work on intelligence and the brain resolved an old paradox. Before imaging technologies like MRI were invented, neuropsychologists used IQ tests to determine what parts of the brain were damaged in patients suffering from strokes and other closed-head injuries. If the patient had trouble with the verbal parts of the test, the damage was probably in the left hemisphere; if the trouble was in the visual parts, the damage was probably in the back of the brain; and so on. But oddly, damage to the frontal lobes seemed to have very little effect on IQ—despite the frontal lobes’ constituting nearly 40% of the cerebral cortex.

Mr. Duncan found that patients with frontal-lobe damage were impaired on tests of “fluid intelligence” that, until recently, were not part of standard IQ tests. These tests measure the ability to solve abstract nonverbal problems in which prior knowledge of language or facts is of no help. For example, a “matrix reasoning” problem presents a grid of complex shapes with one empty space that the test-taker must fill by choosing the correct option from a set of up to eight alternatives. Such tests seem to reveal a raw ability to make optimal use of the information contained within a problem or situation.

Later, Mr. Duncan used PET scanning to measure the brain activity of people without brain damage as they solved problems that varied in difficulty. Regardless of content, as the tests got harder, the subjects made more use of areas in their frontal lobes, as well as in their parietal lobes, which are farther toward the back of the brain.

Mr. Duncan makes a convincing case that these brain areas constitute a special circuit that is crucial for both Spearman’s “g” and for intelligent behavior more generally. But his book elides the question of whether this circuit is also the source of IQ differences. That is, do people who score high on IQ tests use the frontal and parietal areas of their brains differently from people who score lower? The answer, discovered by other researchers, turns out to be yes.

There are other properties of the brain that contribute to “g,” including the speed of basic information-processing (measured by how fast people can press buttons in response to flashing lights) and even the total size of the brain (larger is better). One of the next steps in understanding “g” is to figure out how all these factors interact and combine to produce the wide range of differences we see in human intelligence. Mr. Duncan no doubt will be a key player in this effort, frontal and parietal lobes firing away.

Mr. Chabris is a psychology professor at Union College and the co-author, with Daniel Simons, of “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us” (Crown).



Magic by Numbers

I RECENTLY wound up in the emergency room. Don’t worry, it was probably nothing. But to treat my case of probably nothing, the doctor gave me a prescription for a week’s worth of antibiotics, along with the usual stern warning about the importance of completing the full course.

I understood why I needed to complete the full course, of course. What I didn’t understand was why a full course took precisely seven days. Why not six, eight or nine and a half? Did the number seven correspond to some biological fact about the human digestive tract or the life cycle of bacteria?

My doctor seemed smart. She probably went to one of the nation’s finest medical schools, and regardless of where she trained, she certainly knew more about medicine than I did. And yet, as I walked out of the emergency room that night with my prescription in hand, I couldn’t help but suspect that I’d just been treated with magic.

Certain numbers have magical properties. E, pi and the Fibonacci series come quickly to mind — if you are a mathematician, that is. For the rest of us, the magic numbers are the familiar ones that have something to do with the way we keep track of time (7, say, and 24) or something to do with the way we count (namely, on 10 fingers). The “time numbers” and the “10 numbers” hold remarkable sway over our lives. We think in these numbers (if you ask people to produce a random number between one and a hundred, their guesses will cluster around the handful that end in zero or five) and we talk in these numbers (we say we will be there in five or 10 minutes, not six or 11).

But these magic numbers don’t just dominate our thoughts and dictate our words; they also drive our most important decisions.

Consider my prescription. Antibiotics are a godsend, but just how many pills should God be sending? A recent study of antibiotic treatment published in a leading medical journal began by noting that “the usual treatment recommendation of 7 to 10 days for uncomplicated pneumonia is not based on scientific evidence” and went on to show that an abbreviated course of three days was every bit as effective as the usual course of eight.

My doctor had recommended seven. Where in the world had seven come from?

Italy! Seven is a magic number because it is the number of days in a week, and it was given this particular power in A.D. 321 by the Roman emperor Constantine, who officially reduced the week from eight days to seven. The problem isn’t that Constantine’s week was arbitrary — units of time are often arbitrary, which is why the Soviets adopted the five-day week before they adopted the six-day week, and the French adopted the 10-day week before they adopted the 60-day vacation.

The problem is that Constantine didn’t know a thing about bacteria, and yet modern doctors continue to honor his edict. If patients are typically told that every 24 hours (24 being the magic number that corresponds to the rotation of the earth) they should take three pills (three being the magic number that divides any time period into a beginning, middle and end) and that they should do this for seven days, they will end up taking 21 pills.

If even one of those pills is unnecessary — that is, if people who take 20 pills get just as healthy just as fast as people who take 21 — then millions of people are taking at least 5 percent more medication than they actually need. This overdose contributes not only to the punishing costs of health care, but also to the evolution of the antibiotic-resistant strains of “superbugs” that may someday decimate our species. All of which seems like a rather high price to pay for fealty to ancient Rome.

Magic “time numbers” cost a lot, but magic “10 numbers” may cost even more. In 1962, a physicist named M. F. M. Osborne noticed that stock prices tended to cluster around numbers ending in zero and five. Why? Well, on the one hand, most people have five fingers, and on the other hand, most people have five more. It isn’t hard to understand why an animal with 10 fingers would use a base-10 counting system. But according to economic theory, a stock’s price is supposed to be determined by the efficient workings of the free market and not by the phalanges of the people trading it.

And yet, research shows that fingers affect finances. For example, a stock that closed the previous day at $10.01 will perform about as well as a stock that closed at $10.03, but it will significantly outperform a stock that closed at $9.99. If stocks close two pennies apart, then why does it matter which pennies they are? Because for animals that go from thumb to pinkie in four easy steps, 10 is a magic number, and we just can’t help but use it as a magic marker — as a reference point that $10.01 exceeds and $9.99 does not. Retailers have known this for centuries, which is why so many prices end in nine and so few in one.

The hand is not the only part of our anatomy that gives certain numbers their magical powers. The tongue does too. Because of the acoustic properties of our vocal apparatus, some words just sound bigger than others. The back vowels (the “u” in buck) sound bigger than the front vowels (the “i” in sis), and the stops (the “b” in buck) sound bigger than the fricatives (the “s” in sis). As it turns out, in well over 100 languages, the words that denote bigness are made with bigger sounds.

The sound a number makes can influence our decisions about it. In a recent study, one group was shown an ad for an ice-cream scoop that was priced at $7.66, while another was shown an ad for a $7.22 scoop. The lower price is the better deal, of course, but the higher price (with its silky s’s) makes a smaller sound than the lower price (with its rattling t’s).

And because small sounds usually name small things, shoppers who were offered the scoop at the higher but whispery price of $7.66 were more likely to buy it than those offered the noisier price of $7.22 — but only if they’d been asked to say the price aloud.

The magic that magic numbers do is all too often black. They hold special significance for terrestrial mammals with hands and watches, but they mean nothing to streptococcus or the value of Google. Which is why we should be suspicious when the steps to sobriety correspond to a half turn of our planet, when the eternal commandments of God correspond to the architecture of our paws and when the habits of highly effective people — and highly trained doctors — correspond to the whims of a dead emperor.

Daniel Gilbert is a professor of psychology at Harvard, the author of “Stumbling on Happiness” and the host of the television series “This Emotional Life.”



Everyman’s Gun

Sheer numbers have made the AK-47 the world’s primary tool for killing

The AK-47 is the most numerous and widely distributed weapon in history, with a name and appearance that are instantly recognized worldwide. Designed in the late 1940s for the Soviet Army, the Avtomat Kalashnikova 47 (“Automatic of Kalashnikov 1947”) became the universal weapon by the late 20th century, used by armies, militias and terrorists in practically every armed conflict, and by all sides in most of them. Even the United States has purchased mass quantities of AK-47s for friendly forces in Iraq and Afghanistan, and the armed services and the State Department teach U.S. military and civilian personnel to handle and fire AK-47s in emergencies as part of their preparation for deployment to the war zones.

How did the AK-47 become as fundamental to contemporary warfare as Microsoft operating systems are to corporate computing? C.J. Chivers sets out to tell the story in “The Gun.” A Pulitzer Prize-winning reporter for the New York Times and a former infantry officer in the Marine Corps, he has seen the AK-47 in action while covering wars from Iraq and Afghanistan to Chechnya and Central Asia, and his experiences enhance his account.

The world’s most popular gun in a model with a side-folding stock

The AK-47’s origins are shrouded in the sort of mystery familiar to any historian researching Cold War issues in Russia. The Soviet state built up myths around its chosen heroes, and the man credited with the creation of the AK-47 was one of the leading figures in the pantheon. Mikhail Kalashnikov (born 1919) received not only the Soviet Union’s highest honors but also a suitable official story: a sergeant of modest origins wounded in battle against the Germans in 1941, who during months spent recovering in a hospital turns his previously unrecognized creative genius to designing a weapon that would better defend his homeland. The post-communist Russian government has kept up the accolades—the nonagenarian Mr. Kalashnikov is now a lieutenant general—and Mr. Chivers received little cooperation in his search for authoritative information on the development of the AK-47.

What Mr. Chivers can relate with certainty is the weapon’s place in the evolution of warfare and its ongoing impact on the world. He presents the AK-47 as a final stage in the development of automatic weapons—a compact, simple to manufacture, easily handled and almost indestructible rapid-firing rifle—tracing its story from the first attempts to create machine guns a century earlier. He describes at length Richard Gatling’s invention of hand-cranked rapid-fire weapons during the Civil War. The Gatling gun was little used during that conflict, and the U.S. Army was slow to adopt it afterward, but European armies used Gatling guns to devastating effect in colonial wars. Then Hiram Maxim’s fully automatic machine gun appeared in the 1880s. Capable of firing 600 rounds per minute with the press of a trigger, the Maxim gun in its many derivations (the German Spandau, the British Vickers, the Russian Sokolov, and others) created the dense walls of fire that defined the trench warfare of World War I.

But the Maxim was unwieldy for use by solitary soldiers, and even before the war ended the search for an effective one-man automatic weapon was underway. Germany introduced the MP18, a nine-pound submachine gun designed by Hugo Schmeisser. By using low-powered pistol ammunition, Schmeisser had made a small and portable weapon, but one with a severely limited effective range. Interest in such weapons dwindled after 1918, and the first practical automatic rifle did not appear until late in World War II. The StG44—also designed by Schmeisser and dubbed the “assault rifle” (Sturmgewehr) by Hitler himself—appeared too late for wide distribution during the war. But as Mr. Chivers notes, the StG44 may have had a direct influence on the AK-47.

The Soviet Union began trying to design an automatic rifle just after World War II ended. Mr. Kalashnikov was an obscure 26-year-old sergeant with little formal education and only a few years of experience designing weapons. He headed one of several teams of engineers competing to win the contest to design the automatic rifle, most of them led by established arms designers who had won high honors for their work during the war. After two years of competitive tests and design modifications, the AK-47 emerged the winner.

Mr. Chivers emphasizes that competition between teams of designers and a long back-and-forth process of modification and improvement under army supervision—not the individual brilliance of one man—created the AK-47. Borrowing from the StG44 may have occurred as well. The two weapons share many distinctive design features: the gas piston above the barrel that powers the rifle’s action, the curved 30-round magazine, the stock meant for controlling the weapon when firing on full automatic. Suspicions that the AK-47 was based on the StG44 are reinforced by the fact that Hugo Schmeisser was captured by the Soviet Army in 1945 and spent years in the city of Izhevsk, the main center of AK-47 production to this day.

Regardless of the details of its origins, the AK-47 brought the spread of automatic firepower to its logical conclusion. Like the StG44, the AK-47 used an intermediate-size cartridge, scaled down from the rifle rounds of the two world wars, that gave it sufficient range for any realistic battlefield target but recoil low enough to make automatic fire possible from a one-man, hand-held weapon.

The Soviet Army was also obsessed with simplicity and ruggedness in its weapons, and so the winning design used a minimum of parts, was built far more strongly than necessary and was constructed with a relatively loose fit between its major moving parts, allowing the AK-47 to continue firing even when clogged with powder residue and dirt.

The result: a practically foolproof weapon that works in the most extreme conditions despite neglect and abuse. Mass production of the AK-47 began by 1950, 15 years before the U.S. introduced its own automatic rifle, the M-16. In addition to cranking out AK-47s by the millions, the Soviet Union set up factories to produce them in Warsaw Pact countries and the People’s Republic of China, and eventually in states such as Egypt and Iraq, where the Soviets sought influence. The outpouring of AK-47s is estimated at more than 100 million and still rising—one for every 70 people in the world and more than 10 times the number of M-16s produced. Mr. Chivers notes that this vast supply of AK-47s has made them widely and cheaply available—readily purchased for less than $200 (including delivery by air) in the international arms market.

Mr. Chivers’s efforts to put the AK-47 in a broad historical context are both the great strength and great weakness of “The Gun.” He devotes the first several chapters to the history of the machine gun and biographies of Gatling and Maxim; the book’s longest chapter concerns the M-16’s origins and its early problems during the Vietnam War. The reader spends fully half of the book not reading about the AK-47 at all. Yet the digressive chapters are the more interesting, displaying impressive research—of a kind not possible on the AK-47 and Mikhail Kalashnikov—and deft descriptions of individuals and their experiences.

The author shows equal skill in discussing how lives were changed by the AK-47. He writes about a Hungarian who during the 1956 Soviet invasion became one of the first insurgents to use the AK-47; East Germans shot trying to escape over the Berlin Wall; American soldiers under fire in Vietnam; Israeli athletes murdered in the Munich Olympic Village in 1972; child soldiers in Uganda’s Lord’s Resistance Army; and a Kurdish bodyguard wounded during an attempted assassination in northern Iraq in 2002. Mr. Chivers reminds the reader constantly of the human consequences of the firepower that the AK-47 has made cheap and widely available.

Sheer numbers have made the AK-47 the world’s primary tool for killing—an “everyman’s gun,” Mr. Chivers calls it. The proliferation of weapons of mass destruction has for decades been a primary U.S. and international concern, and much press attention in recent years has been focused on the fashionable campaign against landmines. Mr. Chivers focuses our attention on an ordinary item that has been vastly more destructive and done more to define the character of warfare today than any other weapon.

Mr. Kim, a lawyer, recently returned from a year in Iraq working for the U.S. Treasury Department.



A Kimjongunia would smell as sweet

SOMETIMES there are Kimilsungia exhibitions. Sometimes there are Kimjongilia ones. Citizens of Pyongyang are also treated to combined Kimilsungia and Kimjongilia shows. One such got underway at the beginning of this month, at the Kimilsungia-Kimjongilia Exhibition House: innumerable pots filled with the same two kinds of plant, a monotony alleviated only by a guide’s prediction that North Korea will one day get a third variety.

Kim Jong Il has resisted his late father Kim Il Sung’s predilection for studding North Korea with statues of himself (Pyongyang’s first of Kim Jong Il was reportedly unveiled earlier this year, 16 years after he succeeded his father as North Korea’s leader). Instead, Kim Jong Il says it with flowers. Foreign correspondents invited in for celebrations of the ruling party’s 65th birthday on October 10th saw them everywhere: on billboards, on huge digital screens erected for the festivities on Kim Il Sung Square, in a cascading display in the hotel lobby and in endless profusion at the exhibition (along with huge portraits of the two Kims).

Kim Il Sung officially remains president, against the odds, but the Kimjongilia, a giant red begonia, somehow leaves its visual stamp on Pyongyang even more pervasively than the Kimilsungia, a normal-sized purple orchid. It might be said that the Kimjongilia’s bouffant petals echo the hairstyle of North Korea’s eponymous ruler, but a guide at the exhibition has a more politically correct explanation of the flower’s appearance. Its bright red hue, she says, reflects Kim Jong Il as a “person of passion, with a very strong character”.  

A journalist asked whether different temperature requirements made it difficult to keep begonias and orchids together. “We grow them with our hearts”, said the guide.  In August North Korea’s Kimilsungia and Kimjongilia Research Centre came up with what might be a more reliable way of getting the best out of the Kimjongilia. After “years of research”, said the state news agency KCNA, it devised a chemical agent that could lengthen the blooming period by a week in summer or by 20 days in winter.

Interspersed among the potted plants were occasional models of items representing the two leaders’ great achievements: “a nuclear weapon” was how the guide described one missile-like object. Another was a model of a rocket supposedly carrying a satellite into space (the actual rocket blew up after launch in April 2009, but North Korean officials resolutely insist that it successfully put a satellite into orbit).  Another represented a hand grenade, rifle and rocket launcher. But, no doubt deliberately, it was the Kimjongilia’s redness that struck the eye.

One display was of potted Kimjongilias supposedly donated by foreign diplomatic missions. China’s was uppermost, together with a photograph of Kim Jong Il shaking hands with China’s president, Hu Jintao. Individual European countries were conspicuous by their absence, but there was one pot plant there in the name of the European Union. (The North Koreans had tried to gouge each of the seven European embassies in Pyongyang for flower contributions—though hard currency, it was understood, would do nicely in lieu. The single Kimjongilia was their cost-saving solution.)

Oddly for plants that have acquired such crucial political significance in North Korea—the army has its own huge breeding centre for them—both are actually foreign creations. The Kimilsungia was presented in 1965 by Indonesia’s founding president, Sukarno, and the Kimjongilia arrived in 1988, courtesy of a Japanese botanist. Kim Jong Un, Kim Jong Il’s anointed successor, who was seen by foreign journalists for the first time on October 9th and 10th, has yet to acquire a flower. “In future we will have one”, assures the guide.



Read This Review or . . .

Forgive me if I open on a personal note: The other night I started laughing so hard I had to leave the room. My daughter was trying to study, and I could see she was getting alarmed. It was kind of scary to me, too, if you want to know the truth. For a moment there, as I made it into the bathroom and shut the door, I thought my body was approaching organ failure, not that I know what organ failure feels like, thank God. You hear people say things like “I laughed so hard I cried” and “I nearly fell out of my chair,” but I had gone well beyond the crying stage by the time my metabolism began to return to equilibrium. And then I realized that I hadn’t laughed so hard in 35 years, since I was a teenager, reading National Lampoon.

American men of a certain age will recall the feeling. What I’d been reading the other night was, no coincidence, National Lampoon—specifically the monologue of a fictional New York cabbie named Bernie X. He was the creation of Gerald Sussman, a writer and editor for the Lampoon from its early days in the 1970s to its sputtering death in 1998. Sussman, it is said, wrote more words for the magazine than any other contributor. I’m sorry I can’t quote any of his pieces here. They’re filthy.

If I’d gone ahead and died the other night, my wife would have known whom to sue. “Drunk Stoned Brilliant Dead,” in which Bernie X appears, is the work of Rick Meyerowitz, himself a valued contributor to the Lampoon who had the bright idea to gather his favorite pieces from the magazine into a handsomely produced coffee-table book. Mr. Meyerowitz is best known as the man who painted Mona Gorilla, a shapely, primly dressed primate with come-hither eyes and a smile far more unsettling than Leonardo’s original. That ape may be the most celebrated magazine illustration of the 1970s, its only competition being the Lampoon cover from January 1973. The photograph showed a cowering pup with a revolver to its head next to the timeless tagline: “If You Don’t Buy This Magazine, We’ll Kill This Dog.”

As an illustrator, Mr. Meyerowitz has a bias toward pieces with a strong graphic element. This is altogether fitting. The production values of the earliest issues of National Lampoon were rag-tag, but with the hiring of the art director Michael Gross and gifted painters and designers like Mr. Meyerowitz and Bruce McCall, the presentation of a piece of writing on the page became as essential to the joke as the writing itself.

In parodies of everything from comic books to Babylonian hieroglyphs, the Lampoon technique was a dead-on verisimilitude, exquisitely detailed. No matter how absurd the jokes were, how incongruous, abstract, whimsical or—I repeat myself—filthy, they were delivered with the straightest possible face. Great performers, old showfolk say, never let you see them sweat. National Lampoon writers never let you hear them chuckle.

The classic marriage of word and picture, which Mr. Meyerowitz reprints in full, was a 10-page spoof of travel magazines titled “Stranger in Paradise.” The soft-focus prose of the travel writer (“Wild fruits hang from the branches, waiting to be plucked”) transports us to a lush South Sea island where a “modern day Robinson Crusoe” lives in idyllic retirement. Sumptuous, full-color photographs show him dodging the surf, frolicking with the natives, sunbathing nude on the beach. Our Crusoe is Adolf Hitler, complete with the toothbrush mustache, the penetrating stare and a bottom as pale as a baby’s. No one who has seen the sunbathing photograph has ever been able to forget it. I’ve tried.

Amid the belly laughs was an irony so cool that it could sink to absolute zero. “Making people laugh is the lowest form of humor,” said Michael O’Donoghue, who founded the magazine with some Harvard pals in 1969 and later gained TV fame with “Saturday Night Live.” And it’s true that you—meaning me and my friends—sometimes had trouble finding the joke. Mr. Meyerowitz includes all 12,000 words of a parody by Henry Beard, another founding editor, of a typically grim law-review article. It’s called “Law of the Jungle,” by which he means the real law of the jungle, covering torts, trusts and property rights as understood by hippos and boa constrictors. With its high rhetoric, labyrinthine arguments and endless footnotes, it is as flawlessly rendered as any parody ever written—so precise that it becomes as tedious as the articles it was meant to send up.

You have to be very good to fail in this way, and nobody could have doubted the vast talent assembled behind that grinning gorilla. In the 1970s, however, old-fashioned moralists (soon to be extinct) complained about a deep vein of nihilism running through the magazine. Out in the suburbs we irony-soaked, pseudo-sophisticated teenage boys could only roll our eyes at the tut-tutting. We knew, or thought we did, that every sex joke in Bernie X’s monologues was redeemed by the tonally perfect rendering of the cabbie’s patois (I don’t think we used the word patois).

But from this distance the justice of the moralists’ charge looks glaringly obvious. In their more pompous moments, the Lampoon editors could have defended an appallingly tasteless joke about, say, the My Lai massacre or the Kennedy assassination as an effort to shake the bourgeois out of their complacency. Now it just looks tasteless or worse: an assault on the very notion of tastelessness, on our innate belief that sometimes some subjects should be off-limits.

Tony Hendra, one of the most pretentious of the original editors—quite a distinction in an office full of Harvard boys—writes here of the magazine’s “unique high-low style of comedy, incredible disgustingness paired with intellectual and linguistic fireworks.” The juxtaposition, as they proved every month and as Mr. Meyerowitz’s collection reconfirms, can be side-splitting. The mix is hard to sustain, though, and it makes for a terrible legacy. The high, being so hard to pull off, inevitably fades away, leaving only the low. Gresham’s Law—the bad driving out the good—holds true for comedy too.

With a few exceptions—the Onion, a sitcom or two—this seems to be where American humor finds itself now. You have only to wade into the opening minutes of any Will Ferrell movie to be rendered numb by the body-part jokes, unredeemed by the Lampoon’s intellectual or linguistic fireworks. The unhappy state of humor today gives this dazzling book the feel of a nostalgic excursion—back to a purer era, when all you had to do to make someone laugh was threaten to shoot a dog.

Mr. Ferguson is a senior editor at the Weekly Standard.



The Traveling Salesmen of Climate Skepticism

‘Science as the Enemy’

A dried-up reservoir in Spain (May 2005 photo): The professional skeptics tend to use inconsistent arguments. Sometimes they say that there is no global warming. At other times, they point out that while global warming does exist, it is not the result of human activity.

A handful of US scientists have made names for themselves by casting doubt on global warming research. In the past, the same people have also downplayed the dangers of passive smoking, acid rain and the ozone hole. In all cases, the tactics are the same: Spread doubt and claim it’s too soon to take action.

With his sonorous voice, Fred Singer, 86, sounded like a grandfather explaining the obvious to a dim-witted child. “Nature, not human activity, rules the climate,” the American physicist told a discussion attended by members of the German parliament for the business-friendly Free Democratic Party (FDP) three weeks ago.

Marie-Luise Dött, the environmental policy spokeswoman for the parliamentary group of Angela Merkel’s center-right Christian Democratic Union (CDU), also attended Singer’s presentation. She said afterwards that it was “extremely illuminating.” She later backpedaled, saying that her comments had been quoted out of context, and that of course she supports an ambitious climate protection policy — just like Chancellor Merkel.

Merkel, as it happens, was precisely the person Singer was trying to reach. “Our problem is not the climate. Our problem is politicians, who want to save the climate. They are the real problem,” he says. “My hope is that Merkel, who is not stupid, will see the light,” says Singer, who has since left for Paris. Noting that he liked the results of his talks, he adds: “I think I achieved something.”

Salesman of Skepticism

Singer is a traveling salesman of sorts for those who question climate change. On this year’s summer tour, he gave speeches to politicians in Rome, Paris and the Israeli port city of Haifa. Paul Friedhoff, the economic policy spokesman of the FDP’s parliamentary group, had invited him to Berlin. Singer and the FDP get along famously. The American scientist had already presented his contrary theories on the climate to FDP politicians at the Institute for Free Enterprise, a Berlin-based free-market think tank, last December.

Singer is one of the most influential deniers of climate change worldwide. In his world, respected climatologists are vilified as liars, people who are masquerading as environmentalists while, in reality, having only one goal in mind: to introduce socialism. Singer wants to save the world from this horror. For some, the fact that he made a name for himself as a brilliant atmospheric physicist after World War II lends weight to his words.

Born in Vienna, Singer fled to the United States in 1940 and soon became part of an elite group fighting the Cold War on the science front. After the collapse of the Soviet Union, Singer continued his struggle — mostly against environmentalists, and always against any form of regulation.

Whether it was the hole in the ozone layer, acid rain or climate change, Singer always had something critical to say, and he always knew better than the experts in their respective fields. But in doing so he strayed far away from the disciplines in which he himself was trained. For example, his testimony aided the tobacco lobby in its battle with health policy experts.

‘Science as the Enemy’

The Arlington, Virginia-based Marshall Institute took an approach very similar to Singer’s. Founded in 1984, its initial mission was to champion then US President Ronald Reagan’s Strategic Defense Initiative (SDI), better known as “Star Wars.” After the fall of the Iron Curtain, the founders abruptly transformed their institute into a stronghold for deniers of environmental problems.

“The skeptics thought, if you give up economic freedom, it will lead to losing political freedom. That was the underlying ideological current,” says Naomi Oreskes, a historian of science at the University of California, San Diego, who has studied Singer’s methods. As scientists uncovered more and more environmental problems, the skeptics “began to see science as the enemy.”

Oreskes is referring to only a handful of scientists and lobbyists, and yet they have managed to convince many ordinary people — and even some US presidents — that science is deeply divided over the causes of climate change. Former President George H.W. Bush even referred to the physicists at the Marshall Institute as “my scientists.”

Whatever the issue, Singer and his cohorts have always used the same basic argument: that the scientific community is still in disagreement and that scientists don’t have enough information. For instance, they say that genetics could be responsible for the cancers of people exposed to secondhand smoke, volcanoes for the hole in the ozone layer and the sun for climate change.

Cruel Nature

It almost seems as if Singer were trying to disguise himself as one of the people he is fighting. With his corduroy trousers, long white hair and a fish fossil hanging from a leather band around his neck, he comes across as an amiable old environmentalist. But the image he paints of nature is not at all friendly. “Nature is much to be feared, very cruel and very dangerous,” he says.

At conferences, Singer likes to introduce himself as a representative of the Nongovernmental International Panel on Climate Change (NIPCC). As impressive as this title sounds, the NIPCC is nothing but a collection of like-minded scientists Singer has gathered around himself. A German meteorologist in the group, Gerd Weber, has worked for the German Coal Association on and off for the last 25 years.

According to a US study, 97 percent of all climatologists worldwide assume that greenhouse gases produced by humans are warming the Earth. Nevertheless, one third of Germans and 40 percent of Americans doubt that the Earth is getting warmer. And many people are convinced that climatologists are divided into two opposing camps on the issue — which is untrue.

So how is it that people like Singer have been so effective in shaping public opinion?

Experience Gained Defending Big Tobacco

Many scientists do not sufficiently explain the results of their research. Some climatologists have also been arrogant or have refused to turn over their data to critics. Some overlook inconsistencies or conjure up exaggerated horror scenarios that are not always backed by science. For example, sloppy work was responsible for a prediction in an Intergovernmental Panel on Climate Change (IPCC) report that all Himalayan glaciers would have melted by 2035. It was a grotesque mistake that plunged the IPCC into a credibility crisis.

Singer and his fellow combatants take advantage of such mistakes and utilize their experiences defending the tobacco industry. For decades, Big Tobacco managed to cast doubt on the idea that smoking kills. An internal document produced by tobacco maker Brown & Williamson states: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.”

In 1993, tobacco executives handed around a document titled “Bad Science — A Resource Book.” In the manual, PR professionals explain how to discredit inconvenient scientific results by labeling them “junk.” For example, the manual suggested pointing out that “too often science is manipulated to fulfill a political agenda.” According to the document: “Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.”

‘Junk Science’

In 1993, the US Environmental Protection Agency (EPA) published what was then the most comprehensive study on the effects of tobacco smoke on health, which stated that exposure to secondhand smoke was responsible for about 3,000 deaths a year in the United States. Singer promptly called it “junk science.” He warned that the EPA scientists were secretly pursuing a communist agenda. “If we do not carefully delineate the government’s role in regulating … dangers, there is essentially no limit to how much government can ultimately control our lives,” Singer wrote.

Reacting to the EPA study, the Philip Morris tobacco company spearheaded the establishment of “The Advancement of Sound Science Coalition” (TASSC). Its goal was to raise doubts about the risks of passive smoking and climate change, and its message was to be targeted at journalists — but only those with regional newspapers. Its express goal was “to avoid cynical reporters from major media.”

Singer, Marshall Institute founder Fred Seitz and Patrick Michaels, who is now one of the best known climate change skeptics, were all advisers to TASSC.

Not Proven

The Reagan administration also appointed Singer to a task force on acid rain. In that group, Singer insisted that it was too early to take action and that it hadn’t even been proven yet that sulfur emissions were in fact the cause. He also said that some plants even benefited from acid rain.

After acid rain, Singer turned his attention to a new topic: the “ozone scare.” Once again, he applied the same argumentative pattern, noting that although it was correct that the ozone concentration in the stratosphere was declining, the effect was only local. Besides, he added, it wasn’t clear yet whether chlorofluorocarbons (CFCs) from aerosol cans were even responsible for ozone depletion.

As recently as 1994, Singer claimed that evidence “suggested that stratospheric chlorine comes mostly from natural sources.” Testifying before the US Congress in 1996, he said there was “no scientific consensus on ozone depletion or its consequences” — even though in 1995 the Nobel Prize had been awarded to three chemists who had demonstrated the influence of CFCs on the ozone layer.

The Usual Suspects

Multinational oil companies also soon adopted the tried-and-true strategies of disinformation. Once again, lobbying groups were formed that were designed to look as scientific as possible. First there was the Global Climate Coalition, and then ExxonMobil established the Global Climate Science Team. One of its members was lobbyist Myron Ebell. Another one was a veteran of the TASSC tobacco lobby who already knew the ropes. According to a 1998 Global Climate Science Team memo: “Victory will be achieved when average citizens ‘understand’ (recognize) uncertainties in climate science.”

It soon looked as though there were a broad coalition opposing the science of climate change, supported by organizations like the National Center for Policy Analysis, the Heartland Institute and the Center for Science and Public Policy. In reality, these names were often little more than a front for the same handful of questionable scientists — and Exxon funded the whole illusion to the tune of millions of dollars.

It was an excellent investment.

In 2001, the administration of then-President George W. Bush reneged on previous climate commitments. After that, the head of the US delegation to the Kyoto negotiations met with the oil lobbyists from the Global Climate Coalition to thank them for their expertise, saying that President Bush had “rejected Kyoto in part based on input from you.”

Singer’s comrade-in-arms Patrick Michaels waged a particularly sharp-tongued campaign against the phalanx of climatologists. One of his books is called: “The Satanic Gases: Clearing the Air about Global Warming.” Michaels has managed to turn doubt into a lucrative business. The German Coal Association paid him a hefty fee for a study in the 1990s, and a US electric utility once donated $100,000 to his PR firm.

Inconsistent Arguments

Both Michaels and Ebell are members of the Cooler Heads Coalition. Unlike Singer and Seitz, they are not anti-communist crusaders from the Cold War era, but smooth communicators. Ebell, a historian, argues that life was not as comfortable for human beings in the Earth’s cold phases as in the warm ones. Besides, he adds, there are many indications that we are at the beginning of a cooling period.

The professional skeptics tend to use inconsistent arguments. Sometimes they say that there is no global warming. At other times, they point out that while global warming does exist, it is not the result of human activity. Some climate change deniers even concede that man could do something about the problem, but that it isn’t really much of a problem. There is only one common theme to all of their prognoses: Do nothing. Wait. We need more research.

People like Ebell cannot simply be dismissed as cranks. He has been called to testify before Congress eight times, and he unabashedly crows about his contacts at the White House, saying: “We knew whom to call.”

Ebell faces more of an uphill battle in Europe. In his experience, he says, Europe is controlled by elites who — unlike ordinary people — happen to believe in climate change.

Einstein on a Talk Show

But Fred Singer is doing his best to change that. He has joined forces with the European Institute for Climate and Energy (EIKE). The impressive-sounding name, however, is little more than a P.O. box address in the eastern German city of Jena. The group’s president, Holger Thuss, is a local politician with the conservative Christian Democratic Union (CDU).

Hans Joachim Schellnhuber, director of the respected Potsdam Institute for Climate Impact Research and an adviser to Chancellor Merkel on climate-related issues, says he has no objection to sharing ideas with the EIKE, as long as its representatives can stick to the rules of scientific practice. But he refuses to join EIKE representatives in a political panel discussion, noting that this is precisely what the group hopes to achieve, namely to create the impression among laypeople that experts are discussing the issues on a level playing field.

Ultimately, says Schellnhuber, science has become so complicated that large segments of the population can no longer keep up. The climate skeptics, on the other hand, are satisfied with “a desire for simple truths,” Schellnhuber says.

This is precisely the secret of their success, according to Schellnhuber, and unfortunately no amount of public debate can change that. “Imagine Einstein having to defend the theory of relativity on a German TV talk show,” he says. “He wouldn’t have a snowball’s chance in hell.”


Full article and photo:,1518,721846,00.html

The Beagle Vanishes

In the second column we freed the circle from being a flat-on geometric shape so that it could move out into space as the ellipse. We’ve used it to help us draw a pot and to see the roundness of forms, and now we’re going to use that ellipse to fly us into an imaginary scene that introduces us to the principles of perspective.

We follow that flying Frisbee of an ellipse as it settles down as a perfect little pond on a vast Kansas prairie. A man walks out onto that plain with a picnic basket, a blanket and a beagle. He sits down on his blanket to admire the view and the improbably perfect pond.


The beagle catches the scent of the little rabbit on the other side of the pond and takes off after it. Ignoring the shouts of his master, the dog paddles through the pond, bounds across the vast expanse and disappears over the horizon. (Two nice farmers in the next town find him and call the ASPCA.)


The runaway beagle’s trajectory has given us a vanishing point, the first element in the geometry of perspective: the point on the horizon towards which objects in the picture converge. In the first drawing, the man is sitting down, so his viewpoint is low (and let’s imagine that we’re in a slightly elevated position behind him), and because the horizon line occurs roughly at eye level, the horizon line is also low and all the shapes appear relatively flattened out. Also, in one-point perspective, all the lines running from left to right are parallel to the horizon line.

In the second diagram, where the man stands up to call his dog, he sees the scene from a higher viewpoint and thus the horizon line is also higher within the rectangle of our image. Now the blanket and the pool become wider, front to back, as does the perceived distance between the man’s feet and the horizon. It’s just as the ellipses in the drawing of the pot became wider the more we looked down on them.

As useful as one-point perspective is in drawing a Kansas picnic or highways in the Nevada desert leading straight to the sunset, other scenes require more complicated angles. For these images we need two-point perspective.

Let’s start by going back to the circle and plotting it in two-point perspective so we know how to make an official ellipse. It may not be as fluid or interesting as your free-hand ellipse, but you should know how to do it so you can move on with your life.

Get Giotto to draw you a circle. Or use a compass or trace around a glass. Then, with T square and ruler, draw a box around the circle. Draw a horizon line above the box. Now draw a vertical line through the middle of the box up to the horizon line (A). Draw another line bisecting the box horizontally (B). Then draw two lines from corner to corner to bisect the box diagonally. Now draw two more vertical lines through the points where the diagonals bisect the circle (C and D). This will give you four intersection points, E, F, G and H, around the circumference of the circle.

Are you still with me? Now, something a little easier to do. Choose two vanishing points, left and right (J and I), along the horizon and roughly equidistant from the center. Now draw lines from the right vanishing point (I) to the top corners of the box and to the intersection points C, A and D. Count the lines you have just made — there should be five.

Now, by drawing a line from vanishing point J to the right corner of the box, we are crossing four lines (check the diagram) that give us important intersections. The first is intersection K, the point at which you can make a horizontal line to complete the perspective square. Think of it as a flap bending away from the bottom box. I’ll get to the second intersection in a minute. The third is intersection L, which shows you where to make another horizontal line to establish the center of the perspective box; mark both its left and right intersections “point B” to match how they are identified in the lower box.

The second and fourth intersections, E and H, along with intersections G, F, B and A, match the same points in the lower box and give you the theoretical means to draw a circle seen in perspective. The theory is that you simply connect these eight points (A and B are doubled) with curved lines and, voila, you have the correct ellipse. However, I find it takes a certain amount of fiddling to swing these curves around the corners to make them look right. In other words, you already have to have some sense of what a perspective circle looks like in order to carry out this last bit of the procedure. Whew! Work on your free-hand ellipses.

Now, back to our Kansas prairie picnic. This time, we’ll let our beagle run off at an angle, which will give us a vanishing point, A, to the right of our picture frame. Establish the left-hand vanishing point, B, along the horizon at roughly the same distance from the center as the first vanishing point (as you did in plotting the perspective circle). Choose a point in the lower left of the picture frame along the angle of the first vanishing point for the corner of the blanket (C). Now join that point to the second vanishing point. This gives you the angles of two sides of the blanket. Now choose two points that seem reasonable for the width (D) and length (E) of the blanket and join those points to the appropriate vanishing points. Now you have completed the perspective view of the rectangle of the blanket as it has turned to match the trajectory of the beagle’s flight.

Since we’ve spent so much time plotting the circle in perspective (a.k.a. the ellipse), let’s turn our pond into a little house on the prairie to get some practice with rectilinear shapes.

First, choose a point (F) above and to the right of the blanket for the near corner of the house (as you did with the blanket), and extend lines along the vanishing point angles to establish the length (H) and width (G) of the house. From the point F draw a vertical line to establish the height (I) of the house.

Using the vanishing point trajectories you can now complete the basic box of the house. In order to establish the center points of the two visible walls, make a horizontal base line, J to K, running through point F at the corner of the house. Using lines running from both vanishing points and through the corners of the house, establish the width and length along the base line, J to K. Measure the halfway points along that line. By extending vanishing point lines back to the house from the two midpoints you can figure out where to put the centered roof peak and the centered door, and where to center windows in the remaining spaces. You have now completed a scene of a blanket and a house viewed from the same vantage point.

My personal take on perspective is that one should understand enough of the basic idea of vanishing points to make sense of how objects and buildings recede in space in your everyday life, so that it helps you to draw a convincing image without having to do a lot of plotting. An easy little exercise you can do is to draw a rectangle with a horizon line and then, free-hand, draw a series of boxes aligning with the same one or two vanishing points. It will help you, too, with understanding what things look like when they are low or high in an image field.


For those of you anxious to move deeper into the labyrinth of perspective, I offer this little taste of what you’re in for.

But to see what a great artist can create playing around with very simple perspective, I include this painting, “Melancholy and Mystery of a Street,” by Giorgio de Chirico.

Giorgio de Chirico’s “Melancholy and Mystery of a Street”
James McMullan, New York Times

The Frisbee of Art

Pope Boniface VIII was looking for a new artist to work on the frescoes in St. Peter’s Basilica, so he sent a courtier out into the country to interview artists and collect samples of their work that he could judge. The courtier approached the painter Giotto and asked for a drawing to demonstrate his skill. Instead of a study of angels and saints, which the courtier expected, Giotto took a brush loaded with red paint and drew a perfect circle. The courtier was furious, thinking he had been made a fool of; nonetheless, he took the drawing back to Boniface. The Pope understood the significance of the red circle, and Giotto got the job.

This is often told as the story of the ultimate test of drawing, and I don’t dispute that it is very hard to draw a perfect circle. However, I would argue that it is much more useful to be able to draw a circle existing in space, a circle seen turned at various angles as we usually encounter it in the world. We need to be able to draw an ellipse.

The ellipse is the Frisbee of art, the circle freed from its flatness that sails out into imagined space tilting this way and that and ending up on the top of the soup bowl and silver cup in Jean-Baptiste Chardin’s still life or, imagine this, on the wheels of the speeding Batmobile.

Jean-Baptiste Chardin, “The Silver Goblet”
Pablo Picasso, “Mother and Child”

Once you tune into ellipses, you will begin to see them everywhere: in art, as in the Chardin painting, or in life, in your morning coffee cup or the table top on which the cup sits. The ellipse is also implicit in every cylindrical form whether or not we see its end exposed (as it would be in a can or a cup or a length of pipe). Just look at the Picasso “Mother and Child.” Highlighting the ellipses, as I have done, helps you to understand the basic roundness of those limbs, encouraging you to see and to draw with a volumetric rather than a flat perception of what you are observing. So the ellipse is important because it exists in so many places as an actual shape, and because it is “buried” in so many round forms that we are likely to draw.

The challenge of drawing an ellipse is that it must be done with enough speed to engage the natural “roundingness” of your reflexes. In essence, you are deciding to make an ellipse of a particular shape and then letting your hand and wrist move autonomously to accomplish the job. Much of what you are practicing in learning to draw is engaging your fine motor skills in this way, so that the hand moves to do your bidding without a “controlling” space between deciding to make a particular line and the hand moving to do it. Before this kind of almost simultaneous cooperation between your brain and your hand occurs, you will tend to worry the line out in slow incremental steps. In this hand-eye coordination, drawing is an athletic activity that benefits from practice, like golf or tennis. On a page of your drawing pad, make various kinds of ellipses as a warmup for the exercise below. Keep the movement of your hand fluid and relatively fast.

Let’s begin by drawing a pot. A few words on perspective are in order before we start. Think of looking at a can of soda on a table in front of you: the implied ellipse at the bottom of the can where it sits on the table is rounder than the ellipse at the top of the can because you are looking down on it more. If it’s easier to observe this in a straight-sided drinking glass, then use that as an example. This describes the basic idea, illustrated by the diagram below, that as you look down on an ellipse you see more of it than if the ellipse is higher up relative to your eye level.
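For readers who like to see the geometry confirmed, the idea can be checked with a short computational sketch. The pinhole-camera model, the viewpoint heights and the distances below are my own illustration, not part of the column; the sketch projects a circle lying flat on the ground and measures how round the resulting ellipse is as the viewpoint rises:

```python
import math

def project_circle(cam_height, cam_dist, focal=1.0, n=360):
    """Project a unit circle lying flat on the ground, centered `cam_dist`
    in front of a camera held `cam_height` above the ground, onto an image
    plane. Returns the projected height-to-width ratio of the resulting
    ellipse (near 0 = seen almost edge-on, near 1 = seen almost face-on)."""
    xs, ys = [], []
    for i in range(n):
        t = 2 * math.pi * i / n
        # A point on the ground circle, in camera-centered coordinates:
        x = math.cos(t)                  # left/right offset
        depth = cam_dist + math.sin(t)   # distance in front of the camera
        z = -cam_height                  # the ground lies below the camera
        # Pinhole projection onto the image plane:
        xs.append(focal * x / depth)
        ys.append(focal * z / depth)
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height / width

# The higher the viewpoint above the circle, the rounder the ellipse:
low_view = project_circle(cam_height=1.0, cam_dist=10.0)   # sitting down
high_view = project_circle(cam_height=4.0, cam_dist=10.0)  # standing up
assert low_view < high_view
```

With these particular numbers, the seated viewpoint gives a height-to-width ratio of about 0.10, while the standing viewpoint gives about 0.40: exactly the flattening and rounding the sitting and standing picnicker sees.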

Start your drawing by looking at the top of the pot and making an ellipse as close as you can to the shape you see. Give it a couple of tries if you need to. Now bring down two outside-edge lines to where the pot bulges out. Add a center line all the way down to where you think the bottom of the pot is. Now add two horizontal lines, one at the bulge point and one a little above the bottom of the pot. These lines will guide you as you make the two ellipses that describe the cylindrical shape of the pot. Make the ellipse at the bulge point a little rounder than the top ellipse. Make the bottom ellipse rounder still. Now, looking at the outside edges at the bottom of the pot, draw connecting curves between the two ellipses, trying to capture the nature of the shapes in the way that the bulge is more pronounced at the top, like shoulders, and then curves inward.

Congratulations! You have now made a basic linear drawing of a pot. I encourage you to strengthen your understanding of analyzing round forms by doing an additional exercise: choose a basically cylindrical object from your surroundings and draw it using ellipses in the same way I have just demonstrated. Because you will be studying an object in three dimensions rather than in a photograph, it may be easier to see the ellipses. I have photographed a group of household objects to suggest some of the things that you might consider.

Household Objects

In the next column I’ll show the same pot we’ve just drawn in a more dramatic light to make it easier to understand its volumes, so you can see how the direction the light comes from affects the shadows. You’ll have an opportunity to practice the logic and art of shading.

James McMullan, New York Times



Serendipitous Connections

Innovation occurs when ideas from different people bang against each other.

In the physical universe, chemical reactions are limited by the molecules that are close to one another and the ease with which they can meet up. You can run an electrical current through a chemical bath and synthesize the basic amino acids that form the building-blocks of human life. You cannot synthesize a llama.

So in the field of human knowledge. New ideas are limited by the supply of existing ideas and by the speed with which those ideas can combine to form new ones. The ancients could build accurate astronomical models but could not generate a theory of gravity; they needed better telescopes, better measurements and a theory of calculus. “If I see farther than other men,” said Isaac Newton, “it is because I stood on the shoulders of giants.”

This idea, the importance of proximity, is one of the first concepts that Steven Johnson introduces in “Where Good Ideas Come From.” In many ways, it is the heart of the book, defining not just what innovations are possible at a given time, but also how innovation gets done within the current frontiers of human knowledge. In Mr. Johnson’s telling, innovation is most likely to occur when ideas from different people, and even different fields, are rapidly banging against one another; every so often the ideas will spawn some radical new combination. The most innovative institutions will create settings where ideas are free to move, and connect, in unexpected ways.

Anyone who has written about business or science knows how often stories about inventions start with some chance encounter: “I was sitting next to this guy on an airplane, and he said . . .” The pacemaker was invented by an electronics technician who happened to have lunch with two heart surgeons; McDonald’s became a national chain after Ray Kroc stopped by the original hamburger shack to sell milkshake machines and realized that he had stumbled onto a good thing.

Mr. Johnson thinks that the adjacent possible explains why cities foster much more innovation than small towns: Cities abound with serendipitous connections. Industries, he says, may tend to cluster for the same reason. A lone company in the middle of nowhere has only the mental resources of its employees to fall back on. When there are hundreds of companies around, with workers more likely to change jobs, ideas can cross-fertilize.

The author outlines other factors that make innovation work: the tolerance of failure, as in Thomas Edison’s inexorable process-of-elimination approach to finding a workable light-bulb filament; the way that ideas from one field can be transformed in another; and the power of information platforms to connect disparate data and research. “Where Good Ideas Come From” is filled with fascinating, if sometimes tangential, anecdotes from the history of entrepreneurship and scientific discovery. The result is that the book often seems less a grand theory of innovation than a collection of stories and theories about creativity that Steven Johnson happens to find interesting.

It turns out that Mr. Johnson himself has a big idea, but it’s not a particularly incisive one: He proposes that competition and market forces are less important to innovation than openness and inspiration. The book includes a list of history’s most important innovations and divides them along two axes: whether the inventor was working alone or in a network; and whether he was working for a market reward or for some other reason. Market-led innovations, it turns out, are in the minority.

Certainly it is true that great discoveries happen in government projects or academic labs; it would be foolish to declare that only market incentives can produce transformative ideas. But Mr. Johnson’s list ultimately proves less about the market’s shortcomings than about the shortcomings of the great-discovery model of innovation on which he dwells. Markets may be less effective at delivering radical new ideas, but they excel at converting those ideas into useful tools.

Reverence for the great-discovery model of innovation is what prompts critics of the pharmaceutical industry to declare that all the “real work” of drug discovery is done in university labs, often with taxpayer funding. Drug companies, we are often told, simply steal the ideas and monetize them. And yet what “Big Pharma” does is no less crucial to drug discovery than the basic research that takes place in academia. It is not enough to learn that a certain disease process can be thwarted by a given molecule. You also have to figure out how to cheaply mass-produce that chemical, in a form that can be easily taken by ordinary patients (no IV drugs for acid reflux, please). And before the drug can be approved, it must be run through the expensive human trials required by the Food and Drug Administration.

The endless creativity of the human animal is one of the differences between us and a chimpanzee poking sticks into an anthill in search of a juicy meal. But another one is our capacity for the endless elaboration and refinement of ideas—particularly in a modern economy. Toyota’s prowess at this sort of incremental improvement is legendary, even radical. Wal-Mart, it is said, was responsible for 25% of U.S. productivity growth in the 1990s. That’s not because Sam Walton emerged from his lab one night waving blueprints for a magic productivity machine. The company made continual, often tiny, improvements in the management of its supply chain, opening thousands of stores along the way and putting the benefits within reach of virtually every American.

We are all of us, every day, discovering many things that don’t work very well and a few things that do. Reducing the history of innovation to a few “big ideas” misses the full power of human ingenuity.

Ms. McArdle is the business and economics editor of The Atlantic and a fellow at the New America Foundation.



Kant on a Kindle?

The technology of the book—sheaves of paper covered in squiggles of ink—has remained virtually unchanged since Gutenberg. This is largely a testament to the effectiveness of books as a means of transmitting and storing information. Paper is cheap, and ink endures.

In recent years, however, the act of reading has undergone a rapid transformation, as devices such as the Kindle and iPad account for a growing share of book sales. (Amazon, for instance, now sells more e-books than hardcovers.) Before long, we will do most of our reading on screens—lovely, luminous screens.

The displays are one of the main selling points of these new literary gadgets. Thanks to dramatic improvements in screen resolution, the words shimmer on the glass; every letter is precisely defined, with fully adjustable fonts. Think of it as a beautifully printed book that’s always available in perfect light. For contrast and clarity, it’s hard for Gutenberg to compete.

And these reading screens are bound to get better. One of the longstanding trends of modern technology is to make it easier and easier to perceive fine-grained content. The number of pixels in televisions has increased fivefold in the last 10 years; VHS gave rise to Blu-ray; and computer monitors can display millions of vibrant colors.

I would be the last to complain about such improvements—I shudder to imagine a world without sports on HDTV—but it’s worth considering the ways in which these new reading technologies may change the nature of reading and, ultimately, the content of our books.

Let’s begin by looking at how reading happens in the brain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped to demonstrate that the literate brain contains two distinct pathways for making sense of words, each activated in different contexts. One pathway, known as the ventral route, is direct and efficient: We see a group of letters, convert those letters into a word and then directly grasp the word’s meaning. When you’re reading a straightforward sentence in a clear format, you’re almost certainly relying on this neural highway. As a result, the act of reading seems effortless. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The brain’s second reading pathway, the dorsal stream, is turned on when we have to pay conscious attention to a sentence. Perhaps we’ve encountered an obscure word or a patch of smudged ink. (In his experiments, Mr. Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Mr. Dehaene’s research demonstrates that even adults are still forced to occasionally decipher a text.

The lesson of his research is that the act of reading observes a gradient of awareness. Familiar sentences rendered on lucid e-ink screens are read quickly and effortlessly. Unusual sentences with complex clauses and odd punctuation tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra cognitive work wakes us up; we read more slowly, but we notice more. Psychologists call this the “levels-of-processing” effect, since sentences that require extra levels of analysis are more likely to get remembered.

E-readers have yet to dramatically alter the reading experience; e-ink still feels a lot like old-fashioned ink. But it seems inevitable that the same trends that have transformed our televisions will also affect our reading gadgets. And this is where the problems begin. Do we really want reading to be as effortless as possible? The neuroscience of literacy suggests that, sometimes, the best way to make sense of a difficult text is to read it in a difficult format, to force our brain to slow down and process each word. After all, reading isn’t about ease—it’s about understanding. If we’re going to read Kant on the Kindle, or Proust on the iPad, then we should at least experiment with an ugly font.

Every medium eventually influences the message that it carries. I worry that, before long, we’ll become so used to the mindless clarity of e-ink that the technology will feed back onto the content, making us less willing to endure challenging texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a thorny stretch of prose. And that would be a shame, because not every sentence should be easy to read.

Jonah Lehrer is the author, most recently, of “How We Decide.”



Descent Into Legal Hell

On the afternoon of Sept. 28, 1999, sheriff’s deputies pulled into the driveway of Cynthia Stewart’s Ohio home and arrested her. Her crime: taking pictures of her 8-year-old daughter playing in the bathtub. She had sent the photos to a film-processing lab, and the lab called the police. The police took the pictures to the town prosecutor, who viewed them as harmless and declined to press charges. The police then turned to the county prosecutor, who was all too happy to take the case. He promptly brought child-pornography charges against Ms. Stewart.

“Framing Innocence” is Lynn Powell’s reported account of Ms. Stewart’s descent into legal hell. For two years, the case meandered through the justice system. Child Services filed suit, seeking custody of Ms. Stewart’s daughter on the grounds that the young girl had been abused. Ms. Stewart was threatened with 16 years in jail. Her legal bills ran upwards of $40,000. In the end an intense public campaign on her behalf forced the ambitious prosecutor (who is now a federal judge) to cut a deal in which Ms. Stewart was absolved of wrongdoing.

The case is not unique. By the time Ms. Stewart’s saga ended, another mother in Ohio and a grandmother in New Jersey had also been arrested on similarly absurd charges. The relevant case law, Osborne v. Ohio, gives such a loosely worded standard for child pornography that it allows the state nearly unfettered intrusion into family life. Like the Supreme Court’s eminent-domain decision (Kelo v. City of New London), Osborne has had the effect of unleashing the power of the state on unsuspecting individuals. If you have ever taken a picture of a naked toddler, the only thing standing between you and criminal prosecution is the good judgment of government workers.

The Stewart case is particularly notable in that it took place in Oberlin, Ohio, which is Middle America’s version of Berkeley, Calif. Nearly every character in the cast is liberal to the point of self-parody. Before her court appearance Ms. Stewart had never owned a bra. Her cat was named after a Sandinista spy. Her then-partner worked for the Nation magazine. (They have split up since.) And yet the liberals who rallied to Ms. Stewart’s defense are undiverted from their belief that government should have a great deal to say about how people live their lives.

“Framing Innocence” is thoroughly and fairly reported, without a strong polemical thrust. If there is a point to this morality tale, in Ms. Powell’s telling, it seems to be that in the justice system mistakes can be made—sometimes terrible mistakes. True enough, but more could be said. If a well-meaning law about child pornography can wreak such havoc on families, anyone care to speculate on what a 2,000-page health-care law might do?

Mr. Last is a senior writer at the Weekly Standard.



Too Funny for Words

WHEN my dad, Allen Funt, produced “Candid Microphone” back in the mid-1940s, he used a clever ruse to titillate listeners. A few times per show he’d edit out an innocent word or phrase and replace it with a recording of a sultry woman’s voice saying, “Censored.” Audiences always laughed at the thought that something dirty had been said, even though it hadn’t.

When “Candid Camera” came to television, the female voice was replaced by a bleep and a graphic that flashed “Censored!” As my father and I learned over decades of production, ordinary folks don’t really curse much in routine conversation — even when mildly agitated — but audiences love to think otherwise.

By the mid-1950s, TV’s standards and practices people decided Dad’s gimmick was an unacceptable deception. There would be no further censoring of clean words.

I thought about all this when CBS started broadcasting a show last week titled “$#*! My Dad Says,” which the network insists with a wink should be pronounced “Bleep My Dad Says.” There is, of course, no mystery whatsoever about what the $-word stands for, because the show is based on a highly popular Twitter feed, using the real word, in which a clever guy named Justin Halpern quotes the humorous, often foul utterances of his father, Sam.

Bleeping is broadcasting’s biggest deal. Even on basic cable, the new generation of “reality” shows like “Jersey Shore” bleep like crazy, as do infotainment series like “The Daily Show With Jon Stewart,” where scripted curses take on an anti-establishment edge when bleeped in a contrived bit of post-production. This season there is even a cable series about relationships titled “Who the (Bleep) Did I Marry?” — in which “bleep” isn’t subbing for any word in particular. The comedian Drew Carey is developing a series that CBS has decided to call “WTF!” Still winking, the network says this one stands for “Wow That’s Funny!”

Although mainstream broadcasters won a battle against censorship over the summer when a federal appeals court struck down some elements of the Federal Communications Commission’s restrictions on objectionable language, they’ve always been more driven by self-censorship than by the government-mandated kind. Eager to help are advertisers and watchdog groups, each appearing to take a tough stand on language while actually reveling in the double entendre.

For example, my father and I didn’t run across many dirty words when recording everyday conversation, but we did find that people use the terms “God” and “Jesus” frequently — often in a gentle context, like “Oh, my God” — and this, it turned out, worried broadcasting executives even more than swearing. If someone said “Jesus” in a “Candid Camera” scene, CBS made us bleep it, leaving viewers to assume that a truly foul word had been spoken. And that seemed fine with CBS, because what mainstream TV likes best is the perception of naughtiness.

TV’s often-hypocritical approach to censorship was given its grandest showcase back in 1972, when the comedian George Carlin first took note of “Seven Words You Can Never Say on Television.” The bit was recreated on stage at the Kennedy Center a few years ago in a posthumous tribute to Carlin, but all the words were bleeped — not only for the PBS audience but for the theatergoers as well.

Many who saw the show believed the bleeped version played funnier. After all, when Bill Maher and his guests unleash a stream of nasty words on HBO, it’s little more than barroom banter. But when Jon Stewart says the same words, knowing they’ll be bleeped, it revs up the crowd while also seeming to challenge the censors.

In its July ruling, the appeals court concluded, “By prohibiting all ‘patently offensive’ references to sex … without giving adequate guidance as to what ‘patently offensive’ means, the F.C.C. effectively chills speech, because broadcasters have no way of knowing what the F.C.C. will find offensive.” That’s quite reasonable — and totally beside the point. Most producers understand that when it comes to language, the sizzle has far more appeal than the steak. Broadcasters keep jousting with the F.C.C. begging not to be thrown in the briar patch of censorship, because that’s really where they most want to be.

Jimmy Kimmel has come up with a segment for his late-night ABC program called “This Week in Unnecessary Censorship.” He bleeps ordinary words in clips to make them seem obscene. How bleepin’ dare he! Censorship, it seems, remains one of the most entertaining things on television.

Peter Funt writes about social issues on his Web site, Candid Camera.



The Genius of the Tinkerer

The secret to innovation is combining odds and ends, writes Steven Johnson.

In the year following the 2004 tsunami, the Indonesian city of Meulaboh received eight neonatal incubators from international relief organizations. Several years later, when an MIT fellow named Timothy Prestero visited the local hospital, all eight were out of order, victims of power surges, tropical humidity, and the hospital staff’s inability to read the English repair manual.

Nerdbots are assembled from found objects. Like ideas, they’re random pieces connected to create something new.

Mr. Prestero and the organization he cofounded, Design That Matters, had been working for several years on a more reliable, and less expensive, incubator for the developing world. In 2008, they introduced a prototype called the NeoNurture. It looked like a streamlined modern incubator, but its guts were automotive. Sealed-beam headlights supplied the crucial warmth; dashboard fans provided filtered air circulation; door chimes sounded alarms. You could power the device with an adapted cigarette lighter or a standard-issue motorcycle battery. Building the NeoNurture out of car parts was doubly efficient, because it tapped both the local supply of parts and the local knowledge of automobile repair. You didn’t have to be a trained medical technician to fix the NeoNurture; you just needed to know how to replace a broken headlight.

The NeoNurture incubator is a fitting metaphor for the way that good ideas usually come into the world. They are, inevitably, constrained by the parts and skills that surround them. We have a natural tendency to romanticize breakthrough innovations, imagining momentous ideas transcending their surroundings, a gifted mind somehow seeing over the detritus of old ideas and ossified tradition.

But ideas are works of bricolage. They are, almost inevitably, networks of other ideas. We take the ideas we’ve inherited or stumbled across, and we jigger them together into some new shape. We like to think of our ideas as a $40,000 incubator, shipped direct from the factory, but in reality they’ve been cobbled together with spare parts that happened to be sitting in the garage.

As a tribute to human ingenuity, the evolutionary biologist Stephen Jay Gould maintained an odd collection of sandals made from recycled automobile tires, purchased during his travels through the developing world. But he also saw them as a metaphor for the patterns of innovation in the biological world. Nature’s innovations, too, rely on spare parts.

Evolution advances by taking available resources and cobbling them together to create new uses. The evolutionary theorist Francois Jacob captured this in his concept of evolution as a “tinkerer,” not an engineer; our bodies are also works of bricolage, old parts strung together to form something radically new. “The tires-to-sandals principle works at all scales and times,” Mr. Gould wrote, “permitting odd and unpredictable initiatives at any moment—to make nature as inventive as the cleverest person who ever pondered the potential of a junkyard in Nairobi.”

You can see this process at work in the primordial innovation of life itself. Before life emerged on Earth, the planet was dominated by a handful of basic molecules: ammonia, methane, water, carbon dioxide, a smattering of amino acids and other simple organic compounds. Each of these molecules was capable of a finite series of transformations and exchanges with other molecules in the primordial soup: methane and oxygen recombining to form formaldehyde and water, for instance.

Think of all those initial molecules, and then imagine all the potential new combinations that they could form spontaneously, simply by colliding with each other (or perhaps prodded along by the extra energy of a propitious lightning strike). If you could play God and trigger all those combinations, you would end up with most of the building blocks of life: the proteins that form the boundaries of cells; sugar molecules crucial to the nucleic acids of our DNA. But you would not be able to trigger chemical reactions that would build a mosquito, or a sunflower, or a human brain. Formaldehyde is a first-order combination: You can create it directly from the molecules in the primordial soup. Creating a sunflower, however, relies on a whole series of subsequent innovations: chloroplasts to capture the sun’s energy, vascular tissues to circulate resources through the plant, DNA molecules to pass on instructions to the next generation.



Big new ideas more often result from recycling and combining old ideas than from eureka moments. Consider the origins of some familiar innovations.

Double-entry accounting

One of the essential instruments of modern capitalism appears to have been developed collectively in Renaissance Italy. Now the cornerstone of bookkeeping, double-entry’s innovation of recording every financial event in two ledgers (one for debit, one for credit) allowed merchants to accurately track the health of their businesses. It was first codified by the Franciscan friar Luca Pacioli in 1494, but it had been used for at least two centuries by Italian bankers and merchants.

Gutenberg press

The printing press is a classic combinatorial innovation. Each of its key elements—the movable type, the ink, the paper and the press itself—had been developed separately well before Johannes Gutenberg printed his first Bible in the 15th century. Movable type, for instance, had been independently conceived by a Chinese blacksmith named Pi Sheng four centuries earlier. The press itself was adapted from a screw press that was being used in Germany for the mass production of wine.

Air conditioning

AC counts as a rare instance of innovation through sheer individual insight. After summer heat waves in 1900 and 1901, the owners of a printing company asked the heating-systems specialist Buffalo Forge Co. for a way to make the air in its press rooms less humid. The project fell to a 25-year-old electrical engineer named Willis Carrier, who built a system that cooled the air to a temperature that would produce 55% humidity. His idea ultimately rearranged the social and political map of America.


The scientist Stuart Kauffman has a suggestive name for the set of all those first-order combinations: “the adjacent possible.” The phrase captures both the limits and the creative potential of change and innovation. In the case of prebiotic chemistry, the adjacent possible defines all those molecular reactions that were directly achievable in the primordial soup. Sunflowers and mosquitoes and brains exist outside that circle of possibility. The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.

The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore them. Each new combination opens up the possibility of other new combinations. Think of it as a house that magically expands with each door you open. You begin in a room with four doors, each leading to a new room that you haven’t visited yet. Once you open one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point. Keep opening new doors and eventually you’ll have built a palace.

Basic fatty acids will naturally self-organize into spheres lined with a dual layer of molecules, very similar to the membranes that define the boundaries of modern cells. Once the fatty acids combine to form those bounded spheres, a new wing of the adjacent possible opens up, because those molecules implicitly create a fundamental division between the inside and outside of the sphere. This division is the very essence of a cell. Once you have an “inside,” you can put things there: food, organelles, genetic code.

The march of cultural innovation follows the same combinatorial pattern: Johannes Gutenberg, for instance, took the older technology of the screw press, designed originally for making wine, and reconfigured it with metal type to invent the printing press.

More recently, a graduate student named Brent Constantz, working on a Ph.D. that explored the techniques that coral polyps use to build amazingly durable reefs, realized that those same techniques could be harnessed to heal human bones. Several IPOs later, the cements that Mr. Constantz created are employed in most orthopedic operating rooms throughout the U.S. and Europe.

Mr. Constantz’s cements point to something particularly inspiring in Mr. Kauffman’s notion of the adjacent possible: the continuum between natural and man-made systems. Four billion years ago, if you were a carbon atom, there were a few hundred molecular configurations you could stumble into. Today that same carbon atom can help build a sperm whale or a giant redwood or an H1N1 virus, along with every single object on the planet made of plastic.

The premise that innovation prospers when ideas can serendipitously connect and recombine with other ideas may seem logical enough, but the strange fact is that a great deal of the past two centuries of legal and folk wisdom about innovation has pursued the exact opposite argument, building walls between ideas. Ironically, those walls have been erected with the explicit aim of encouraging innovation. They go by many names: intellectual property, trade secrets, proprietary technology, top-secret R&D labs. But they share a founding assumption: that in the long run, innovation will increase if you put restrictions on the spread of new ideas, because those restrictions will allow the creators to collect large financial rewards from their inventions. And those rewards will then attract other innovators to follow in their path.

Circa 1450: Johannes Gutenberg (c. 1400–1468), inventor of printing, examines a page from his first printing press.

The problem with these closed environments is that they make it more difficult to explore the adjacent possible, because they reduce the overall network of minds that can potentially engage with a problem, and they reduce the unplanned collisions between ideas originating in different fields. This is why a growing number of large organizations—businesses, nonprofits, schools, government agencies—have begun experimenting with more open models of idea exchange.

Organizations like IBM and Procter & Gamble, which have a long history of profiting from patented, closed-door innovations, have embraced open innovation platforms over the past decade, sharing their leading-edge research with universities, partners, suppliers and customers. Modeled on the success of services like Twitter and Flickr, new Web startups now routinely make their software accessible to programmers who are not on their payroll, allowing these outsiders to expand on and remix the core product in surprising new ways.

Earlier this year, Nike announced a new Web-based marketplace it calls the GreenXchange, where it has publicly released more than 400 of its patents that involve environmentally friendly materials or technologies. The marketplace is a kind of hybrid of commercial self-interest and civic good. This makes it possible for outside firms to improve on those innovations, creating new value that Nike might ultimately be able to put to use itself in its own products.

In a sense, Nike is widening the network of minds who are actively thinking about how to make its ideas more useful, without adding any more employees. But some of its innovations might well turn out to be advantageous to industries or markets in which it has no competitive involvement whatsoever. By keeping its eco-friendly ideas behind a veil of secrecy, Nike was holding back ideas that might, in another context, contribute to a sustainable future—without any real commercial justification.

A hypothetical scenario invoked by the company at the launch of the GreenXchange would have warmed the heart of Stephen Jay Gould: perhaps an environmentally sound rubber originally invented for use in running shoes could be adapted by a mountain bike company to create more sustainable tires. Apparently, Gould’s tires-to-sandals principle works both ways. Sometimes you make footwear by putting tires to new use, and sometimes you make tires by putting footwear to new use.

There is a famous moment in the story of the near-catastrophic Apollo 13 mission—wonderfully captured in the Ron Howard film—in which the mission control engineers realize they need to create an improvised carbon dioxide filter, or the astronauts will poison the lunar module atmosphere with their own exhalations before they return to Earth. The astronauts have plenty of carbon “scrubbers” onboard, but these filters were designed for the original, damaged spacecraft and don’t fit the ventilation system of the lunar module they are using as a lifeboat to return home. Mission control quickly assembles a “tiger team” of engineers to hack their way through the problem.

In the movie, Deke Slayton, head of flight crew operations, tosses a jumbled pile of gear on a conference table: hoses, canisters, stowage bags, duct tape and other assorted gadgets. He holds up the carbon scrubbers. “We gotta find a way to make this fit into a hole for this,” he says, and then points to the spare parts on the table, “using nothing but that.”

The space gear on the table defines the adjacent possible for the problem of building a working carbon scrubber on a lunar module. (The device they eventually concocted, dubbed the “mailbox,” performed beautifully.) The canisters and nozzles are like the ammonia and methane molecules of the early Earth, or those Toyota parts heating an incubator: They are the building blocks that create—and limit—the space of possibility for a specific problem. The trick to having good ideas is not to sit around in glorious isolation and try to think big thoughts. The trick is to get more parts on the table.

Steven Johnson is the author of seven books, including “The Invention of Air.” This essay is adapted from “Where Good Ideas Come From: The Natural History of Innovation.”



Lost libraries

The strange afterlife of authors’ book collections

A few weeks ago, Annecy Liddell was flipping through a used copy of Don DeLillo’s “White Noise” when she saw that the previous owner had written his name inside the cover: David Markson. Liddell bought the novel anyway and, when she got home, looked the name up on Wikipedia.

Markson, she discovered, was an important novelist himself–an experimental writer with a cult following in the literary world. David Foster Wallace considered Markson’s “Wittgenstein’s Mistress”–a novel that had been rejected by 54 publishers–“pretty much the high point of experimental fiction in this country.” When it turned out that Markson had written notes throughout Liddell’s copy of “White Noise,” she posted a Facebook update about her find. “i wanted to call him up and tell him his notes are funny, but then i realized he DIED A MONTH AGO. bummer.”

The news of Liddell’s discovery quickly spread through Facebook and Twitter’s literary districts, and Markson’s fans realized that his personal library, about 2,500 books in all, had been sold off and was now anonymously scattered throughout The Strand, the vast Manhattan bookstore where Liddell had bought her book. And that’s when something remarkable happened: Markson’s fans began trying to reassemble his books. They used the Internet to coordinate trips to The Strand, to compile a list of their purchases, to swap scanned images of his notes, and to share tips. (The easiest way to spot a Markson book, they found, was to look for the high-quality hardcovers.) Markson’s fans told stories about watching strangers buy his books without understanding their origin, even after Strand clerks pointed out Markson’s signature. They also started asking questions, each one a variation on this: How could the books of one of this generation’s most interesting novelists end up on a bookstore’s dollar clearance carts?

What Markson’s fans had stumbled on was the strange and disorienting world of authors’ personal libraries. Most people might imagine that authors’ libraries matter–that scholars and readers should care what books authors read, what they thought about them, what they scribbled in the margins. But far more libraries get dispersed than saved. In fact, David Markson can now take his place in a long and distinguished line of writers whose personal libraries were quickly, casually broken down. Herman Melville’s books? One bookstore bought an assortment for $120, then scrapped the theological titles for paper. Stephen Crane’s? His widow died a brothel madam, and her estate (and his books) were auctioned off on the steps of a Florida courthouse. Ernest Hemingway’s? To this day, all 9,000 titles remain trapped in his Cuban villa.

The issues at stake when libraries vanish are bigger than any one author and his books. An author’s library offers unique access to a mind at work, and its treatment provides a look at what exactly the literary world decides to value in an author’s life. John Wronoski, a longtime book dealer in Cambridge, has seen the libraries of many prestigious authors pass through his store without securing a permanent home. “Most readers would see these names and think, ‘My god, shouldn’t they be in a library?’” Wronoski says. “But most readers have no idea how this system works.”

The literary world is full of treasures and talismans, not all of them especially literary–a lock of Byron’s hair has been sold at auction; Harvard has archived John Updike’s golf score cards.

For private collectors and university libraries, though, the most important targets are manuscripts and letters and research materials–what’s collectively known as an author’s papers–and rare, individually valuable books. In the first category, especially, things can get expensive. The University of Texas’s Harry Ransom Center recently bought Bob Woodward and Carl Bernstein’s papers for $5 million and Norman Mailer’s for $2.5 million. Compared to the papers, the author’s own library takes a back seat. “An author’s books are important,” says Tom Staley, the Ransom Center’s director, “but they’re no substitute for the manuscripts and the correspondence. The books are gravy.”

Updike would seem to have agreed. After his death in 2009, Harvard’s Houghton Library bought Updike’s archive, more than 125 shelves of material that he assembled himself. Updike chose to include 1,500 books, but that number is inflated by his own work–at least one copy of every edition of every book in every language it was issued. “He was not so comprehensive in the books that he read,” says Leslie Morris, Harvard’s curator for the Updike archive. In fact, Updike was known to donate old books to church book sales and to hand them out to friends’ wives. Late in life, he made a deal with Mark Stolle, who owns a bookstore in Manchester-by-the-Sea. “He would call me once his garage was filled,” Stolle remembers, “and I would go over and buy them.”

While he didn’t seem to value them, Updike’s books begin to show how and why an author’s library does matter. In his copy of Tom Wolfe’s “A Man in Full,” which was one of Stolle’s garage finds, Updike wrote comments like “adjectival monotony” and “semi cliché in every sentence.” A comparison with Updike’s eventual New Yorker review suggests that authors will write things in their books that they won’t say in public.

An author’s library, like anyone else’s, reveals something about its owner. Mark Twain loved to present himself as self-taught and under-read, but his carefully annotated books tell a different story. Books can offer hints about an author’s social and personal life. After David Foster Wallace’s death in 2008, the Ransom Center bought his papers and 200 of his books, including two David Markson novels that Wallace not only annotated, but also had Markson sign when they met in New York in 1990. Most of all, though, authors’ libraries serve as a kind of intellectual biography. Melville’s most heavily annotated book was an edition of John Milton’s poems, and it proves he reread “Paradise Lost” while struggling with “Moby-Dick.”

And yet these libraries rarely survive intact. The reasons for this can range from money problems to squabbling heirs to poorly executed auctions. Twain’s library makes for an especially cringe-worthy case study because, unlike a lot of now-classic authors, he saw no ebb in his reputation–and, thus, no excuse for the handling of his books. In 1908, Twain donated 500 books to the library he helped establish in Redding, Conn. After Twain’s death in 1910, his daughter, Clara, gave the library another 1,700 books. The Redding library began circulating Twain’s books, many of which contained his notes, and souvenir hunters began cutting out every page that had Twain’s handwriting. This was bad enough, but in the 1950s the library decided to thin its inventory, unloading the unwanted books on a book dealer who soon realized he now possessed more than 60 titles annotated by Mark Twain. Today, academic libraries across the country own Twain books in which “REDDING LIBRARY” has been stamped in purple ink.

But the 1950s also marked the start of a shift in the way many scholars and librarians appraised an author’s books. They began trying to reassemble the most famous authors’ libraries–or, in worst-case scenarios like Twain’s, to compile detailed lists of every book a writer had owned. The effort and ingenuity behind these lists can be astounding, as scholars will sift through diaries, receipts, even old library call slips. A good example is Alan Gribben’s “Mark Twain’s Library: A Reconstruction,” which runs to two volumes and took nine years to complete.

This raises an obvious question: Why not make the list of an author’s books before dispersing them? The answer, usually, is time. Book dealers, Wronoski says, can’t assemble scholarly lists while also moving enough inventory to stay in business. When Wallace’s widow and his literary agent, Bonnie Nadell, sorted through his library, they sent only the books he had annotated to the Ransom Center. The others, more than 30 boxes’ worth, they donated to charity. There was no chance to make a list, Nadell says, because another professor needed to move into Wallace’s office. “We were just speed skimming for markings of any kind.”

Still, the gap between the labor required on the front end and the back end can make such choices seem baffling and even–a curious charge to make when discussing archives–short-sighted. Libraries, for their part, must also allocate limited resources, and they do so based on a calculus of demand, precedent, and prestige. This means the big winners are historical authors (in the 1980s, Melville’s copy of Milton sold at an auction for $100,000) and those who fit into a library’s targeted specialties. “We tend to focus on Harvard-educated authors,” Morris says. “The Houghton Library is pretty much full and has been for the last 10 years.”

In David Markson’s case, the easiest explanation for why his books ended up at The Strand is that he wanted them to. Markson, who lived near the bookstore, would stop by three or four times a week. The Strand, in turn, hosted his book signings and maintained a table of his books, and Markson’s daughter, Johanna, says he frequently told her in his final years to take his books to The Strand. “He said they’d take good care of us,” she says.

And so, after Johanna and her brother saved some books that were important to them–“I want my children to see what kind of reader their grandfather was,” Johanna says–a truck from The Strand picked up the rest, 63 boxes in all. Fred Bass, The Strand’s owner, says he had to break Markson’s library apart because of the size of his operation. “We do it with most personal libraries,” Bass says. “We don’t have room to set up special collections.”

Markson had sold books to The Strand before. In fact, over the years, he sold off his most valuable books and even small batches of his literary correspondence simply to make ends meet. Markson recalled in one interview that, when he asked Jack Kerouac to sign a book for him, Kerouac was so drunk he stabbed the pen through the front page. Bass said he personally looked through Markson’s books hoping to find items like this. “But David had picked it pretty clean.”

Selling his literary past became a way for Markson to sustain his literary future. In “Wittgenstein’s Mistress” and the four novels that followed, Markson abandoned characters and plots in favor of meticulously ordered allusions and historical anecdotes–a style he called “seminonfictional semifiction.” That style, along with the skill with which he prosecuted it, explains both the size and the passion of Markson’s audience.

Markson’s late style also explains the special relevance of his library, and it’s a wonderful twist that these elements all came together in the campaign to crowdsource it. Through a Facebook group and an informal collection of blog posts, Markson’s fans have put together a representative sample of his books. The results won’t satisfy the scholarly completist, but they reveal the range of Markson’s reading–not just fiction and poetry, but classical literature, philosophy, literary criticism, and art history. They also illuminate aspects of Markson’s life (one fan got the textbooks Markson used while a graduate student) and his art (another got his copy of “Foxe’s Book of Martyrs,” where Markson had underlined passages that resurface in his later novels). Most of all, they capture Markson’s mind as it plays across the page. In his copy of “Agape Agape,” the final novel from postmodern wizard William Gaddis, Markson wrote: “Monotonous. Tedious. Repetitious. One note, all the way through. Theme inordinately stale + old hat. Alas, Willie.”

Markson’s letters to and from Gaddis were one of the things he sold off–they’re now in the Gaddis collection at Washington University–but Johanna Markson says he left some papers behind. “He always told us, ‘When I die, that’s when I’ll be famous,’” she says, and she’s saving eight large bins full of Markson’s edited manuscripts, the note cards he used to write his late novels, and his remaining correspondence. A library like Ohio State’s, which specializes in contemporary fiction, seems like a good match. In fact, Geoffrey Smith, head of Ohio State’s Rare Books and Manuscripts Library, says he would have liked to look at Markson’s library, in addition to his papers. “We would have been interested, to say the least,” Smith says.

But if Markson’s library–and a potential scholarly foothold–has been lost, other things have been gained. A dead man’s wishes have been honored. A few fans have been blessed. And an author has found a new reader. “I’m glad I got that book,” Annecy Liddell says. “I really wouldn’t know who Markson is if I hadn’t found that. I haven’t finished ‘White Noise’ yet but I’m almost done with ‘Wittgenstein’s Mistress’–it’s weird and great and way more fun to read.”

Craig Fehrman is working on a book about presidents and their books.



The me-sized universe

Some parts of the cosmos are right within our grasp

If you happen to think about the universe during the course of your day, you will likely be overwhelmed.

The universe seems vast, distant, and unknowable. It is, for example, unimaginably large and old: The number of stars in our galaxy alone exceeds 100 billion, and the Earth is 4.5 billion years old. In the eyes of the universe, we’re nothing. We humans are tiny and brief. And much of the physics that drives the universe occurs on the other end of the scale, almost inconceivably small and fast. Chemical changes can occur faster than the blink of an eye, and atoms make the head of a pin seem like a mountain (really more like three Mount Everests).

Clearly, our brains are not built to handle numbers on this astronomical scale. While we are certainly a part of the cosmos, we are unable to grasp its physical truths. To call a number astronomical is to say that it is extreme, but also, in some sense, unknowable. We may recognize our relative insignificance, but leave dwelling on it to those equipped with scientific notation.

However, there actually are properties of the cosmos that can be expressed at the scale of the everyday. We can hold the tail of this beast of a universe–even if only for a moment. Shall we try?

Let’s begin at the human scale of time: It turns out that there is one supernova, a cataclysmic explosion of a star that marks the end of its life, about every 50 years in the Milky Way. The interval between these stellar explosions fits comfortably within the life span of a single person, and not even a particularly long-lived one. So throughout human history, each person has likely been around for one or two of these bursts that can briefly burn brighter than an entire galaxy.

On the other hand, while new stars are formed in our galaxy at a faster rate, it is still nice and manageable, with about seven new stars in the Milky Way each year. So, over the course of an average American lifetime, each of us will have gone about our business while nearly 550 new stars were born.
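The arithmetic behind that figure is simple enough to check. Here is a minimal sketch, assuming the article’s rate of about seven new stars per year and a U.S. life expectancy of roughly 78 years (the lifespan value is an assumption the article implies rather than states):

```python
# Back-of-the-envelope check of the "nearly 550 new stars" claim.
# Assumed inputs: ~7 new Milky Way stars per year, ~78-year lifespan.
stars_per_year = 7
lifespan_years = 78

stars_in_a_lifetime = stars_per_year * lifespan_years
print(stars_in_a_lifetime)  # 546, i.e. "nearly 550"
```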

But stars are always incomprehensibly large, right? Well, not always. Sometimes, near the end of a star’s life, it doesn’t explode. Instead, it collapses in on itself. Some of these are massive enough to become black holes, where space and time become all loopy. But just short of that, some stars collapse and become massive objects known as neutron stars. While these stars have incredible gravitational fields and can be detected from very far away, they are actually not very large. They are often only about 12 miles in diameter, which is about the distance from MIT to Wellesley College. And while a neutron star’s mass is about 500,000 times that of the Earth, it is actually very easy to picture, at least in terms of size.

Moving to the other end of the size spectrum, hydrogen atoms are unbelievably small: You would need to line up over 10 billion of them in a row to reach the average adult arm span. However, the wavelength of the energy a neutral hydrogen atom releases is right in our comfort zone: about 21 centimeters (or 8 inches). This is only about one-eighth the average height of a human being. This fact was even encoded pictorially on the plaques on the Pioneer probes, in order to show human height to any extraterrestrials that might eventually find these probes now hurtling out of the solar system, and who might be interested in how big we are.

And let’s not forget energy, though it might seem hard to find energetic examples on the human scale. For example, the sun, a fairly unimpressive star, releases over 300 yottajoules of energy each second, where yotta-, the largest prefix in the metric system, denotes a one followed by 24 zeroes. Nonetheless, there are energy quantities we can handle. The most energetic cosmic rays–highly energetic particles of mysterious origin that come from somewhere deep in space–have about the same amount of energy as a pitcher throwing a baseball at 60 miles per hour. This is the low end of the speed of a knuckleball, which is one of the slowest pitches in baseball. While the fact that a tiny subatomic particle has that much energy is truly astounding, it’s no Josh Beckett fastball.
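That comparison can be sanity-checked with the standard kinetic-energy formula, E = ½mv². A hedged sketch, assuming a regulation 0.145-kilogram baseball at 60 miles per hour and the most energetic cosmic ray on record (the 1991 “Oh-My-God” particle, at roughly 3 × 10²⁰ electron volts); both values come from outside the article:

```python
# Comparing a 60 mph baseball to the most energetic cosmic ray known.
# Assumed values (not from the article): 0.145 kg regulation baseball;
# cosmic-ray energy ~3e20 eV (the 1991 "Oh-My-God" particle).
mass_kg = 0.145
speed_m_s = 60 * 1609.344 / 3600           # 60 mph -> ~26.8 m/s

baseball_energy_j = 0.5 * mass_kg * speed_m_s ** 2   # ~52 joules

ev_to_joule = 1.602176634e-19              # CODATA value of one eV
cosmic_ray_energy_j = 3e20 * ev_to_joule   # ~48 joules

print(f"{baseball_energy_j:.0f} J vs {cosmic_ray_energy_j:.0f} J")
```

The two numbers land within about 10 percent of each other, which is why the baseball comparison works.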

While these examples might seem few and far between, there is good news: The universe is actually becoming less impersonal. Through science and technology, we are getting better at bringing cosmic quantities to the human scale. For example, the number of stars in our Milky Way galaxy is less than half the total number of bits that can be stored on a Blu-ray disc. The everyday is slowly but surely inching towards the cosmic.

Yes, the universe is big and we are small. But we must treasure the exceptions, and see a little bit of the human in the cosmic, even if only for a moment.

Samuel Arbesman is a postdoctoral fellow in the Department of Health Care Policy at Harvard Medical School and is affiliated with the Institute for Quantitative Social Science at Harvard University. He is a regular contributor to Ideas.



Boxing Lessons

I offer training in both philosophy and boxing. Over the years, some of my colleagues have groused that my work is a contradiction, building minds and cultivating rational discourse while teaching violence and helping to remove brain cells. Truth be told, I think philosophers with this gripe should give some thought to what really counts as violence.  I would rather take a punch in the nose any day than be subjected to some of the attacks that I have witnessed in philosophy colloquia.  However, I have a more positive case for including boxing in my curriculum for sentimental education. 

Western philosophy, even before Descartes’ influential case for a mind-body dualism, has been dismissive of the body. Plato — even though he competed as a wrestler — and most of the sages who followed him, taught us to think of our arms and legs as nothing but a poor carriage for the mind.  In “Phaedo,” Plato presents his teacher Socrates on his deathbed as a sort of Mr. Spock yearning to be free from the shackles of the flesh so he can really begin thinking seriously. In this account, the body gives rise to desires that will not listen to reason and that becloud our ability to think clearly.

In much of Eastern philosophy, in contrast, the search for wisdom is more holistic. The body is considered inseparable from the mind, and is regarded as a vehicle, rather than an impediment, to enlightenment. The unmindful attitude towards the body so prevalent in the West blinkers us to profound truths that the skin, muscles and breath can deliver like a punch.

While different physical practices may open us to different truths, there is a lot of wisdom to be gained in the ring. Socrates, of course, maintained that the unexamined life was not worth living, that self-knowledge is of supreme importance. One thing is certain: boxing can compel a person to take a quick self-inventory and gut check about what he or she is willing to endure and risk. As Joyce Carol Oates observes in her minor classic, “On Boxing”:

Boxers are there to establish an absolute experience, a public accounting of the outermost limits of their beings; they will know, as few of us can know of ourselves, what physical and psychic power they possess — of how much, or how little, they are capable.

Though the German idealist philosopher G.W.F. Hegel (1770-1831) never slipped on the gloves, I think he would have at least supported the study of the sweet science. In his famous Lord and Bondsman allegory,[1] Hegel suggests that it is in mortal combat with the other, and ultimately in our willingness to give up our lives, that we rise to a higher level of freedom and consciousness. If Hegel is correct, the lofty image that the warrior holds in our society has something to do with the fact that in her willingness to sacrifice her own life, she has escaped the otherwise universal choke hold of death anxiety. Boxing can be seen as a stylized version of Hegel’s proverbial trial by battle and as such affords new possibilities of freedom and selfhood.

Viewed purely psychologically, practice in what used to be termed the “manly art” makes people feel more at home in themselves, and so less defensive and perhaps less aggressive. The way we cope with the elemental feelings of anger and fear determines to no small extent what kind of person we will become. Enlisting Aristotle, I shall have more to say about fear in a moment, but I don’t think it takes a Freud to recognize that many people are mired in their own bottled up anger. In our society, expressions of anger are more taboo than libidinal impulses. Yet, as our entertainment industry so powerfully bears out, there is plenty of fury to go around. I have trained boxers, often women, who find it extremely liberating to learn that they can strike out, throw a punch, express some rage, and that no one is going to die as a result.

And let’s be clear, life is filled with blows. It requires toughness and resiliency. There are few better places than the squared circle to receive concentrated lessons in the dire need to be able to absorb punishment and carry on, “to get off the canvas” and “roll with the punches.” It is little wonder that boxing, more than any other sport, has functioned as a metaphor for life. Aside from the possibilities for self-fulfillment, boxing can also contribute to our moral lives.

In his “Nicomachean Ethics,” Aristotle argues that the final end for human beings is eudaimonia — the good life, or as it is most often translated, happiness. In an immortal sentence Aristotle announces, “The Good of man (eudaimonia) is the active exercise of his soul’s faculties in conformity with excellence or virtue, or if there be several human excellences or virtues, in conformity with the best and most perfect among them.”[2]

A few pages later, Aristotle acknowledges that there are in fact two kinds of virtue or excellence, namely, intellectual and moral.[3] Intellectual excellence is simple book learning, or theoretical smarts. Unlike his teacher Plato and his teacher’s teacher, Socrates, Aristotle recognized that a person could know a great deal about the Good and not lead a good life. “With regard to excellence,” says Aristotle, “it is not enough to know, but we must try to have and use it.” [4]

Aristotle offers a table of the moral virtues that includes, among other qualities, temperance, justice, pride, friendliness and truthfulness. Each semester when I teach ethics, I press my students to generate their own list of the moral virtues. “What,” I ask, “are the traits that you connect with having character?”  Tolerance, kindness, self-respect, and creativity always make it onto the board, but it is usually only with prodding that courage gets a nod. And yet, courage seems absolutely essential to leading a moral life. After all, if you do not have mettle, you will not be able to abide by your moral judgments.  Doing the right thing often demands cutting against our immediate and long-range self-interest. It frequently involves sacrifices that we do not much care for, sometimes of friendships, or jobs; sometimes, as in the case with Socrates, even of our lives. Making these sacrifices is impossible without courage.

According to Aristotle, courage is a mean between rashness and cowardliness;[5] that is, between having too little trepidation and too much. Aristotle reckoned that in order to be able to hit the mean, we need practice in dealing with the emotions and choices corresponding to that virtue.  So far as developing grit is concerned, it helps to get some swings at dealing with manageable doses of fear. And yet, even in our approach to education, many of us tend to think of anything that causes a shiver as traumatic.  Consider, for example, the demise of dodge ball in public schools. It was banned because of the terror that the flying red balls caused in some children and of the damage to self-esteem that might come with always being the first one knocked out of the game. But how are we supposed to learn to stand up to our fears if we never have any supervised practice in dealing with the jitters? Of course, our young people are very familiar with aggressive and often gruesome video games that simulate physical harm and self-defense, but without, of course, any of the consequences and risks that might come with putting on the gloves.

Boxing, with the right, attentive supervision, provides practice with fear in quite manageable increments. In their first sparring session, boxers usually erupt in “fight or flight” mode. When the bell rings, novices forget everything they have learned and simply flail away.  If they stick with it for a few months, their fears diminish; they can begin to see things in the ring that their emotions blinded them to before. More importantly, they become more at home with feeling afraid. Fear is painful, but it can be faced, and in time a boxer learns not to panic about the blows that will be coming his way.

While Aristotle is able to define courage, the study and practice of boxing can enable us to not only comprehend courage, but “to have and use” it. By getting into the ring with our fears, we will be less likely to succumb to trepidation when doing the right thing demands taking a hit. To be sure, there is an important difference between physical and moral courage. After all, the world has seen many a brave monster. The willingness to endure physical risks is not enough to guarantee uprightness; nevertheless, it can, I think, contribute in powerful ways to the development of moral virtue.


[1] G.W.F. Hegel, “Phenomenology of Spirit,” Chapter 4.
[2] Aristotle, “Nicomachean Ethics,” Book I, Chapter 7.
[3] Ibid., Book I, Chapter 13.
[4] Ibid., Book X, Chapter 9.
[5] Ibid., Book III, Chapter 7.

Gordon Marino is an active boxing trainer and professor of philosophy at St. Olaf College. He covers boxing for the Wall Street Journal, is the editor of “Ethics: The Essential Writings” (Modern Library Classics, 2010) and is at work on a book about boxing and philosophy.



State Security, Post-Soviet Style

Closing down independent political life, branding critics as ‘extremists.’

In Soviet days, every corner of the KGB was under the tight control of the Communist Party. In Vladimir Putin’s Russia, the FSB—the KGB’s main successor—is largely unsupervised by anyone. Mr. Putin, briefly the FSB’s boss in the late 1990s, gave the secret-police agency free rein after taking over as Russia’s president from the ailing Boris Yeltsin in 2000. The FSB’s license has continued under the Putin-steered presidency of Dmitry Medvedev. The agency’s autonomy has been a catastrophe for Russia and should be a source of grave concern for the West.

Mr. Yeltsin encouraged competition between Russia’s spooks, but—as Andrei Soldatov and Irina Borogan make clear in “The New Nobility,” a disturbing portrait of the agency—Mr. Putin has given the FSB (short for Federalnaya Sluzhba Bezopasnosti, or Federal Security Service) a near monopoly. Originally just a domestic security service, it has become a sprawling empire, with capabilities ranging from electronic intelligence-gathering to control of Russia’s borders and operations beyond them. “According to even cautious estimates, FSB personnel total more than 200,000,” the authors write. The FSB’s instincts are xenophobic and authoritarian, its practices predatory and incompetent.

Critics of Russia see the FSB as the epitome of the country’s lawlessness and corruption. But those inside the agency see themselves as the ultimate guardians of Russia’s national security, thoroughly deserving of the rich rewards they reap. Nikolai Patrushev, who succeeded Mr. Putin as the agency’s director in 2000 and who is now secretary of Russia’s Security Council, calls his FSB colleagues a “new nobility.” Mr. Soldatov and Ms. Borogan see a different parallel: They liken the FSB to the Mukhabarat, the secret police found in Saudi Arabia and other Arab countries: impenetrable, corrupt and ruthless.

Few people are better placed than Mr. Soldatov and Ms. Borogan to write with authority on this subject. They run the website Agentura.Ru, a magpie’s nest of news and analysis that presents a well-informed view of the inner workings of this secret state. Given the fates that have befallen other investigative journalists in Russia in recent years, some might fear for the authors’ safety. But the publication of “The New Nobility” in English is welcome; it should be essential reading for those who hold naïve hopes about Russia’s development or who pooh-pooh the fears of its neighbors.

The book provides a detailed history of the FSB’s ascendancy over the past decade. It describes how Mr. Putin turned to the agency to consolidate his power. (The authors do not share the notion, held by some Russia-watchers, that it was the FSB—in those days a demoralized and chaotic outfit—that actually put Mr. Putin into the top job.) We’re told that Mr. Putin gave the agency a seat at Russia’s “head table,” but “trough,” rather than table, might be more accurate.

The authors recount how the Russian government has made outright land grants in much sought-after areas to high-ranking FSB officials, who then build gaudy mansions down the road from their oligarch neighbors. “Whether in the form of valuable land, luxury cars, or merit awards, the perks afforded FSB employees (especially those in particularly good standing) offer significant means of personal advancement. Russia’s new security services are more than simply servants of the state—they are landed property owners and powerful players.”

Mr. Soldatov and Ms. Borogan also present a chilling account of how the FSB, along with the prosecutor’s office and the interior ministry, has closed down independent political life in Russia, intimidating bloggers and trade unionists, infiltrating and disrupting opposition parties, and tarring all critics of the regime as “extremists.”

The authors give skimpy treatment to the FSB’s downgraded but still important rivals within the Russian bureaucracy: the GRU military-intelligence service and the SVR, which retains the main responsibility for foreign espionage (including the maintenance of an extensive network of “sleeper” agents, such as those unmasked in the U.S. over the summer). “The New Nobility” is unbeatable for its depiction of today’s FSB, but the book might have paid more attention to the long-term debilitating effects of the agency’s corruption and nepotism: Those may contain the seeds of the FSB’s ultimate destruction.

Mr. Soldatov and Ms. Borogan rightly highlight the grim results of FSB power in Russia. Its counterterrorism efforts have been a fiasco. Russia faces a terrorist threat from alienated and brutalized Muslims in the North Caucasus that is far worse than it was in the Yeltsin years.

Greed, rather than selfless patriotism, has been the hallmark of Mr. Patrushev’s “new nobility.” The FSB may indeed be in some respects as dreadful as the indolent, spendthrift and brutal Russian aristocracy toppled in the Bolshevik revolution. But that is presumably not the parallel that the grand-duke of spookdom had in mind.

Mr. Lucas is the international editor of the Economist and the author of “The New Cold War: Putin’s Russia and the Threat to the West.”



The case against aid

The world’s humanitarian aid organizations may do more harm than good, argues Linda Polman

In 1859, a Swiss businessman named Henry Dunant took a business trip to Italy, where he happened upon the aftermath of a particularly bloody battle in the Austro-Sardinian War. Tens of thousands of soldiers were left dead or wounded on the battlefield, with no medical attention. He was so shaken by the experience that he went on to found what is known today as the International Committee of the Red Cross.

Today, in the vocabulary of war, the ICRC and other aid organizations like it are known as the good guys in a world full of bad guys. They swarm into refugee camps all over the world with tents, potable water, flour, and medicine, providing relief and disregarding politics.

But what if those relief efforts ultimately help fighters regain their strength and return to battle, prolonging a terrible war? What if such aid projects are hijacked by genocidal despots to swell their own coffers? What if cynical leaders have learned how to manufacture humanitarian disasters just to attract aid money? And what if the aid groups know all this, but turn a blind eye so that they can compete for a slice of a $160 billion industry?

“The Crisis Caravan,” a new book by journalist Linda Polman, joins a long tradition of exposés written by aid skeptics, many of whom are insiders to the business. Polman was not privy to the inner circle of any aid group, so she often relies on anecdotes told by unnamed sources to make her case. Nevertheless, she gives some powerful examples of unconscionable assistance: how the international community fed Hutu fighters who had committed genocide in Rwanda, and who then continued their violent campaigns from the UN-funded refugee camps; how the Ethiopian government manufactured a famine, and then used aid groups to lure people away from their homes toward a life of forced labor. In Polman’s world, these are not exceptions, but the rule in a world where aid workers have become enablers of the very atrocities they seek to relieve.

Polman, who is based in Amsterdam, spoke to Ideas by telephone from France, and later by telephone from Norway.

IDEAS: What made you so disillusioned by aid work?

POLMAN: I was living in Sierra Leone in West Africa in 2000, 2001, when the peace agreement was signed between the government and the RUF….I was a correspondent for a Dutch newspaper and Dutch radio, covering the war and the UN operations that [were] trying to lure the country out of the hands of the rebels. All the time I was there, the country was in total darkness. There was no electricity. There were no radios. With the peace accord, the aid budget was released for Sierra Leone, and with the release of the aid budget, the caravan of aid was released….In a very short time, more than 200 NGOs moved into the country.

Everything changed. For the first couple of days, I was happy with that. I thought the country was going to be rescued. But because I knew the country quite well, I saw it was the people I considered the bad guys — the political elites who were responsible for the war — they were the ones who had access to the aid. I thought, this can’t be right. That’s when I started to research what happens in other countries. It is always what happens. It is always the elites and the strongmen who profit.

IDEAS: Your book says that food aid is always used as a weapon of war by the very fighters that create humanitarian disasters in the first place. Is aid always bad? Would the world be a better place without it?

POLMAN: I believe that aid could be given in a much more efficient and less dangerous way….After every humanitarian intervention, the aid organizations analyze what went well and what went wrong. Every analysis…says the weakest point is that the aid organizations are not cooperating well enough, which makes them vulnerable to abuse of the aid.

IDEAS: You don’t cite any examples of good aid projects. Is anybody out there doing it right?

POLMAN: I know of an orphanage in Haiti that has been there for the past 35 years. It has proved over the past 35 years that it is doing a good job. But if the humanitarian world decides en masse to move into one war zone where the bad guys are waiting for them with open arms, they should expect many problems and many instances of abuse.

IDEAS: You talk in your book about how Florence Nightingale eventually developed a philosophy that we should just let wars be as terrible as possible, so that people would stop having them. Would there be less war without aid?

POLMAN: We don’t know, because we never tried to stop aid and then count the amount of wars, or count the amount of days that wars go on. But the thoughts of Florence Nightingale make sense to me. The cost…of the war should be left in the hands of the people who want the war. She thought that if you make it easier for warmongers to have their wars, then you prolong them and make them more severe.

IDEAS: A central tenet of aid workers is political neutrality. In the book, you write that this is often a farce. Should aid take sides in a war? Would it be more effective if it did?

POLMAN: The reality is that aid is not being given a choice. Aid is being used by parties that are at war with each other. Even if aid wants to be neutral, the choice is made for them….If an aid organization cannot decide itself how to distribute aid, when to distribute aid, to whom to distribute aid, if the aid organization doesn’t have the power to make decisions about its own aid, you can do two things. You can say, “Well, that is just reality.” Or you can say, “We will not deliver the aid.”…Medecins Sans Frontieres [Doctors Without Borders] does it sometimes. Sometimes they make the moral stance, and sometimes they don’t.

IDEAS: What is the worst example of abuse of aid that you saw?

POLMAN: In Sierra Leone, I realized that the rebel soldiers who had been hacking off people’s hands and feet, they actually could explain to me how to manipulate the aid system….They explained to me that for 10 years, all those years they were fighting and the West didn’t want to hear about their war. It was only after they started to amputate people, more people and more people, that the international community was taking notice of their war. Those simple rebel soldiers in Africa could explain to me how that aid system works. That alarmed me….

A Security Council report this year concluded that up to half of the World Food Programme’s $485 million-per-year budget for Somalia is diverted from the people who actually need it, to a web of corrupt contractors, Islamic militants, and local UN staff members who are also involved in this scheme. We can shrug our shoulders about $245 million a year, but in Somalia, this is a lot of money and it is fueling conflict, and it is fueling the wrong people.

Farah Stockman, foreign affairs reporter for the Boston Globe, also runs an educational program for street children in Kenya.



‘Delusions of Gender’ argues that faulty science is furthering sexism


Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference

By Cordelia Fine

About halfway through this irreverent and important book, cognitive psychologist Cordelia Fine offers a fairly technical explanation of the fMRI, a common kind of brain scan. By now, everyone is familiar with these head-shaped images, with their splashes of red and orange and green and blue. But far fewer know what those colors really mean or where they come from.

It’s not as if these machines are taking color videos of the human brain in action — not even close. In fact, these high-tech scanners are gathering data several steps removed from brain activity and even further from behavior. They are measuring the magnetic quality of hemoglobin, as a proxy for the blood oxygen being consumed in particular regions of the brain. If the measurement is different from what one would expect, scientists slap some color on that region of the map: hot, vibrant shades such as red if it’s more than expected; cool, subdued tones if it’s less.

Fine calls this “blobology”: the science — or art — of creating images and then interpreting them as if they have something to do with human behavior. Her detailed explanation of brain-scanning technology is essential to her argument, as it conveys a sense of just how difficult it is to interpret such raw data. She isn’t opposed to neuroscience or brain imaging; quite the opposite. But she is ardently against making authoritative interpretations of ambiguous data. And she’s especially intolerant of any intellectual leap from analyzing iffy brain data to justifying a society stratified by gender. Hence her title, “Delusions of Gender,” which can be read as an intentional slur on the scientific minds perpetrating this deceit.

Fine gives these scientists no quarter, and her beef isn’t just with brain scanners. Consider her critique of a widely cited study of babies’ gazes, conducted when the infants were just a day and a half old. The study found that baby girls were much more likely to gaze at the experimenter’s face, while baby boys preferred to look at a mobile. The scientists took these results as evidence that girls are more empathic than boys, who are more analytic than girls — even without socialization. The problem, not to put too fine a point on it, is that it’s a lousy experiment. Fine spends several pages systematically discrediting the study, detailing flaw after flaw in its design. Again, it’s a somewhat technical, methodological discussion, but an important one, especially since this study has become a cornerstone of the argument that boys and girls have a fundamental difference in brain wiring.

By now, you should be getting a feeling for the tone and texture of this book. Fine offers no original research on the brain or gender; instead, her mission is to demolish the sloppy science being used today to justify gender stereotypes — which she labels “neurosexism.” She is no less merciless in attacking “brain scams,” her derisive term for the many popular versions of the idea that sex hormones shape the brain, which then shapes behavior and intellectual ability, from mathematics to nurturance.

Two of her favorite targets are John Gray, author of the “Men Are From Mars, Women Are From Venus” books, and Louann Brizendine, author of “The Female Brain” and “The Male Brain.” Fine’s preferred illustration of Gray’s “neurononsense” is his discussion of the brain’s inferior parietal lobe, or IPL. The left IPL is more developed in men, the right IPL in women, which for Gray illuminates a lot: He says this anatomical difference explains why men become impatient when women talk too long and why women are better able to respond to a baby crying at night. Fine dismisses such conclusions as nothing more than “sexism disguised in neuroscientific finery.”

Gray lacks scientific credentials. Brizendine has no such excuse, having been trained in science and medicine at Harvard, Berkeley and Yale. And Fine saves her big guns — and her deepest contempt — for her. For the purposes of this critique, Fine fact-checked every single citation in “The Female Brain,” examining every study that Brizendine used to document her argument that male and female brains are fundamentally different. Brizendine cited hundreds of academic articles, making the text appear authoritative to the unwary reader. Yet on closer inspection, according to Fine, the articles are either deliberately misrepresented or simply irrelevant.

“Neurosexism” is hardly new. Fine traces its roots to the mid-19th century, when the “evidence” for inequality included everything from snout elongation to “cephalic index” (ratio of head length to head breadth) to brain weight and neuron delicacy. Back then, the motives for this pseudoscience were transparently political: restricting access to higher education and, especially, the right to vote. In a 1915 New York Times commentary on women’s suffrage, neurologist Charles Dana, perhaps the most illustrious brain scientist of his time, catalogued several differences between men’s and women’s brains and nervous systems, including the upper half of the spinal cord. These differences, he claimed, proved that women lack the intellect for politics and governance.

None of this was true, of course. Not one of Dana’s brain differences withstood the rigors of scientific investigation over time. And that is really the main point that Fine wants to leave the reader pondering: The crude technologies of Victorian brain scientists may have been replaced by powerful brain scanners such as the fMRI, but time and future science may judge imaging data just as harshly. Don’t forget, she warns us, that wrapping a tape measure around the head was once considered modern and scientifically sophisticated. Those seductive blobs of color could end up on the same intellectual scrap heap.

Wray Herbert’s book “On Second Thought: Outsmarting Your Mind’s Hard-Wired Habits” has just been published.


Full article:

Five myths about prostitution

Last weekend, Craigslist, the popular provider of Internet classified advertising, halted publication of its “adult services” section. The move followed criticism from law enforcement officials across the country who have accused the site of facilitating prostitution on a massive scale. Of course, selling sex is an old business — most say the oldest. But as the Craigslist controversy proves, it’s also one of the fastest changing. And as a result, most people’s perceptions of the sex trade are wildly out of date.

1. Prostitution is an alleyway business.

It once was, of course. In the late 1800s, as Northern cities boomed, the sex trade in America became synonymous with the seedy side of town. Men who wanted to find prostitutes combed alleys behind bars, dimly lit parks and industrial corridors. But today, only a few big cities, such as Los Angeles and Miami, still have a thriving outdoor street market for sex. New York has cleaned up Times Square, Chicago’s South Loop has long since gentrified, and even San Francisco’s infamous Tenderloin isn’t what it used to be.

These red-light districts waned in part because the Internet became the preferred place to pick up a prostitute. Even the most down-and-out sex worker now advertises on Craigslist (or did until recently), as well as on dating sites and in online chat forums. As a result, pimps’ role in the sex economy has been diminished. In addition, the online trade has helped bring the sex business indoors, with johns and prostitutes increasingly meeting up in bars, in hotels, in their own homes or in apartments rented by groups of sex workers. All this doesn’t mean a john can’t get what he’s looking for in the park, but he had better be prepared to search awhile.

Although putting numbers on these trends is difficult, the transition from the streets to the Internet seems to have been very rapid. In my own research on sex workers in New York, women who in 1999 worked mostly outdoors said that by 2004, demand on the streets had decreased by half.

2. Men visit sex workers for sex.

Often, they pay them to talk. I’ve been studying high-end sex workers (by which I mean those who earn more than $250 per “session”) in New York, Chicago and Paris for more than a decade, and one of my most startling findings is that many men pay women to not have sex. Well, they pay for sex, but end up chatting or having dinner and never get around to physical contact. Approximately 40 percent of high-end sex worker transactions end up being sex-free. Even at the lower end of the market, about 20 percent of transactions don’t ultimately involve sex.

Figuring out why men pay for sex they don’t have could sustain New York’s therapists for a long time. But the observations of one Big Apple-based sex worker are typical: “Men like it when you listen. . . . I learned this a long time ago. They pay you to listen — and to tell them how great they are.” Indeed, the high-end sex workers I have studied routinely see themselves as acting the part of a counselor or a marriage therapist. They say their job is to feed a man’s need for judgment-free friendship and, at times, to help him repair his broken partnership. Little wonder, then, that so many describe themselves to me as members of the “wellness” industry.

3. Most prostitutes are addicted to drugs or were abused as children.

This was once the case, as a host of research on prostitution long ago confirmed. But the population of women choosing sex work has changed dramatically over the past decade. High-end prostitutes of the sort Eliot Spitzer frequented account for a greater share of the sex business than they once did. And as Barnard College’s Elizabeth Bernstein has shown, sex workers today tend to make a conscious decision to enter the trade — not as a reaction to suffering but to earn some quick cash. Among these women, Bernstein’s research suggests, prostitution is viewed as a part-time job, one that grants autonomy and flexibility.

These women have little in common with the shrinking number of sex workers who still work on the streets. In a 2001 study of British prostitutes, Stephanie Church of Glasgow University found that those working outdoors “were younger, involved in prostitution at an earlier age, reported more illegal drug use, and experienced significantly more violence from their clients than those working indoors.”

4. Prostitutes and police are enemies.

When it comes to the sex trade, police officers have in recent decades functioned as quasi-social workers. Peter Moskos’s recent book, “Cop in the Hood: My Year Policing Baltimore’s Eastern District,” describes how police often play counselor to sex workers, drug dealers and a host of other illegal moneymakers. In my own work, I’ve found that cops are among the most empathetic and helpful people sex workers meet on the job. They typically hand out phone numbers for shelters, soup kitchens and emergency rooms, and they tend to demonstrate a great deal of sympathy for women who have been abused. Instead of arresting an abused sex worker, police officers will usually let her off with a warning and turn their attention to finding her abusive client.

Unfortunately, officers say it is becoming more difficult to help such women; as they move indoors, it is simply more difficult to locate them. Of course, many big-city mayors embrace this same turn of events, since the rate of prostitution-related arrests drops precipitously when cops can’t find anyone to nab. But for police officers, it makes day-to-day work quite challenging.

Officers in Chicago and New York who once took pride in helping women exit the sex trade have told me about their frustration. Abusive men can more easily rob or hurt a sex worker in a building than on the street, they say. And while cops may receive a call about an overheard disturbance, the vague report to 911 is usually not enough to pinpoint the correct apartment or hotel room. There are few things more dispiriting, they say, than hearing of a woman’s cries for help and being unable to find her.

5. Closing Craigslist’s “adult services” section will significantly affect the sex trade.

Although Craigslist offered customers an important means to connect with sellers of sexual services, its significance has probably been exaggerated.

Even before the site’s “adult services” section was shut down, it was falling out of favor among many users. Adolescent pranksters were placing ads as hoaxes. And because sex workers knew that cops were spending a lot of time responding to ads, they were increasingly hesitant to answer solicitations. I found that 80 percent of the men who contacted women via Craigslist in New York never consummated their exchange with a meeting.

How the sex trade will evolve from here is anyone’s guess, but the Internet is vast, and already we are seeing increasing numbers of sex workers use Twitter and Facebook to advertise their services. Apparently, the desire to reveal is sometimes greater than the desire to conceal.

Sudhir Venkatesh is a professor of sociology at Columbia University and the author of “Gang Leader for a Day: A Rogue Sociologist Takes to the Streets.”



Salvation in Small Steps

With the collapse of various ideologies and totalizing nostrums, human rights became ever more important in world affairs. Brendan Simms reviews Mr. Moyn’s “The Last Utopia: Human Rights in History.”

In their classic essay collection, “The Invention of Tradition” (1983), the historians Eric Hobsbawm and Terence Ranger showed how many features of British society that seem to be rooted in time immemorial, such as public-school rituals and royal ceremonials, are actually of recent provenance. Similarly, in “The Last Utopia,” Samuel Moyn challenges the notion that something now so well-established as the idea of human rights—foundational rights that individuals possess against enslavement, religious oppression, political imprisonment and other brutalities of arbitrary governments—had its origins in the remote past. This “celebratory” approach, he charges, uses history to “confirm the inevitable rise” of human rights “rather than register the choices that were made and the accidents that happen.” The truth, Mr. Moyn shows, is that human rights, as we understand them today, are a “recent and contingent” development.

Mr. Moyn quickly disposes of the idea that human rights originated with the Greeks, who after all kept slaves, or even with the French revolutionaries of the late 18th century, whose “Rights of Man” led to the Terror. More controversially, Mr. Moyn denies that the experience of World War II and the Holocaust produced a decisive shift in our understanding of how to guard against systematic assaults on human life and dignity. Admittedly, the United Nations did issue the Universal Declaration of Human Rights in 1948, but this document led only to a cul-de-sac; it had few practical effects. Nor did the concept come riding in on the back of the anticolonialism sweeping the world in the 1950s and 1960s, which was focused on self-determination, not individual rights.

The breakthrough, Mr. Moyn argues, came only in the 1970s. This decade saw the Jackson-Vanik amendment of 1974, which tied U.S. trade with the Soviet Union to the right of Soviet citizens to emigrate. It was followed in 1975 by the Helsinki Accords, which required that the signatories, including the Soviet Union, respect “freedom of thought, conscience, religion [and] belief,” to quote the accord itself.

Such principles were soon used by Eastern European and Soviet dissidents to challenge the logic of the Soviet empire itself. The charisma of various figures—Natan Sharansky, Andrei Sakharov, Václav Havel, Adam Michnik—gave human rights the aspect of an international “cause,” and in 1977 Amnesty International—whose work on behalf of political prisoners epitomized the new focus on individual rights—was awarded the Nobel Peace Prize. Soon after, the administration of Jimmy Carter made human rights an integral part of official American policy, insisting that they be respected not only by the hostile Soviet Union but also by allied powers such as South Korea, though Mr. Carter was much softer on the shah’s Iran. In this way, as Mr. Moyn puts it, “human rights were forced to move not only from morality to politics, but also from charisma to bureaucracy.”

The reasons for this shift were numerous. Human rights had always been a part of the West’s Cold War policies, but their force had been blunted by the continued existence of European empires and, later, by the U.S. presence in Vietnam, where the brutality of war made it hard for America to serve as a moral arbiter. After decolonization and the withdrawal from Indochina, however, the battle was rejoined to devastating effect. The Soviet Union, a virtual police state, had nowhere to hide. Meanwhile, the experience of a decade or more of African and Asian independence had hardly been an advertisement for the moral purity of newly “free” states, where rights could be newly violated. A political consensus began to form that crossed party divides. “We’ll be against the dictators you don’t like the most,” Sen. Daniel Patrick Moynihan told a rival, “if you’ll be against the dictators we don’t like the most.”

Most important of all, however, was the intellectual and emotional effect of the collapse of alternative ideologies. Over the course of 70 years or so, communism, anticolonialism and even the grandiose designs of the West’s expanded welfare states had failed to deliver on their bright promises. Human rights, Mr. Moyn claims, were thus “the last utopia.” Unlike the totalizing nostrums of the past, they offered salvation in small, manageable steps—”saving the world one individual at a time,” as one activist put it.

The arguments in “The Last Utopia” are persuasive, but the book is not without its problems. It is true that Mr. Moyn’s past rights-champions did not advocate the utopian program of the 1970s in every respect, but they were less far off than he concedes. The Cold Warriors behind the European Convention on Human Rights in 1950, for example, were surely close to Mr. Carter in the late 1970s in their insistence on political and civil rights rather than the broad spectrum of so-called social and economic rights demanded by the political left.

Mr. Moyn exposes the political motivations behind much of human-rights history—the supporters of the “humanitarian” interventions of the 1990s, for instance, cited human rights as a pedigree for their preferred policies. But his own views occasionally surface. We are never told why it is “disturbing” that the Reagan era saw an “assimilation of human rights to the inadequately developed program of ‘democracy promotion’ “; after all, the administration’s support for dissident groups in Eastern Europe throughout the 1980s did much to undermine Soviet autocracy there. Nor is it obvious that neoconservative arguments about the universality of human rights have had “many tragic consequences.” No matter. The triumph of “The Last Utopia” is that it restores historical nuance, skepticism and context to a concept that, in the past 30 years, has played a large role in world affairs.

Mr. Simms, a professor of international relations at Cambridge University, is the author of “Three Victories and a Defeat: The Rise and Fall of the First British Empire.”



Politics and the Cult of Sentimentality

Wilde said that sentimentality is the desire to have the luxury of emotion without paying for it.

When, as in my case, you have identified what you think is a social trend—the increasing sentimentality of public discourse, which brings with it disastrous practical consequences—you begin to see examples of it everywhere.

On Thursday of last week, for example, I happened to be reading an article in Le Monde while waiting for a plane at Charles de Gaulle airport. The article took up a whole page and was titled “Las Vegas Inferno.” “Inferno” was written in letters an inch tall.

I hold no particular brief for Las Vegas. I would like to see it, but only in the sense that I wanted to see North Korea (and did): One should experience all that one can of the world, and Las Vegas is surely unique.

The inferno of the article was that of the homeless of the city, 300-500 of whom live in the concrete-and-steel tunnels built in the 1970s as drains for the torrential rains that often afflict Nevada. The article says of the people who live in them that they are “the poorest of the poor, poverty-stricken rejects in the entrails of the gilded city.”

Poverty-stricken rejects in the entrails of the gilded city: The words suggest a terrible and cruel injustice done to them. But who, exactly, has rejected them, and thereby forced them into the entrails? This way of putting it inevitably turns them into victims of a cruel world.

Three cases are mentioned—those of Craig, David and Medina. Craig has lived in the tunnels for five years, and his belongings have been washed away three times in the past few months. His food is paid for with food stamps; he gathers money left behind in the one-armed bandits in the casinos above-ground to buy cannabis—”my only drug,” he says. No further details are offered as to why he resorted to living in the tunnels in the first place.

David, whose alcohol-ravaged face bears a long scar, came to Las Vegas attracted by “the eldorado of greenbacks and the promise of endless job opportunities.” Then, in the words of the article, he knew “that slow decline when gambling debts become insurmountable and drugs replace friends.”

On this view of things, the gambling debts and the drugs that replaced friends had an existence independent of his behavior. They had agency in his life, unlike him. The debts came and took his money away and the drugs arrived and forced him to take them, contrary to the wishes of his friends. David is therefore a victim, and nothing but a victim.

Medina, aged 36, is an Indian woman, and she has recently escaped the tunnels. Her beauty has been destroyed by “abuse and maltreatment.” She has five children, whom she hardly knows. I hope I shall not be accused of cultural insensitivity when I write that she must nevertheless have known where they came from.

It was her lover, Manny, “who first dragged me down there into the tunnels.” She thought at first that he was going to kill her, but she went nonetheless, and they stayed there a year. New building works in the tunnels rendered their situation untenable (though previously in the article we have learned that there are 300 kilometers of such tunnels to choose from) and “then I believed that I wanted to see my children again.”

What is startling about all this is that the author of the article evinces no curiosity about how the three came to be in the situation he describes. Why not? The questions to ask are so obvious that one must wonder why he did not ask them.

Part of the problem is that he sees Las Vegas as a manifestation of “the American Dream,” though actually it is a perversion of that dream, and he wants to demonstrate the badness or cruelty of that dream. No doubt the authentic dream—that of individuals endlessly free to reinvent and advance themselves—also has a dark side, as American literature records. But the tunnels under Las Vegas are not it.

The main reason that the author does not ask the obvious questions is that to have done so would have been to reduce the sentimental reaction that he wanted to evoke in his readers. And a little reflection shows that this reaction depended on a rather cruel premise: that if people are to any considerable extent the authors of their own misfortunes, we should exclude them from our pity. Instead, we turn them into the passive victims of circumstance.

Does it matter that we do this? I think that it does. Sentimentality allows us to congratulate ourselves on our own warmth and generosity of heart. Oscar Wilde said that sentimentality is the desire to have the luxury of emotion without paying for it. It turns the people on whom it is bestowed into objects. It attempts, often successfully, to disguise from them their own part in their downfall. It suggests solutions to problems that do not, because they cannot, work. Sentimentality is the ally of ever-expanding bureaucracy, for the more a solution doesn’t work, the more of it is needed.

Theodore Dalrymple is the pen name of Anthony Daniels. His latest book is “Spoilt Rotten: The Toxic Cult of Sentimentality” (Gibson Square, 2010).



Forget What You Know About Good Study Habits

Every September, millions of parents try a kind of psychological witchcraft, to transform their summer-glazed campers into fall students, their video-bugs into bookworms. Advice is cheap and all too familiar: Clear a quiet work space. Stick to a homework schedule. Set goals. Set boundaries. Do not bribe (except in emergencies).

And check out the classroom. Does Junior’s learning style match the new teacher’s approach? Or the school’s philosophy? Maybe the child isn’t “a good fit” for the school.

Such theories have developed in part because of sketchy education research that doesn’t offer clear guidance. Student traits and teaching styles surely interact; so do personalities and at-home rules. The trouble is, no one can predict how.

Yet there are effective approaches to learning, at least for those who are motivated. In recent years, cognitive scientists have shown that a few simple techniques can reliably improve what matters most: how much a student learns from studying.

The findings can help anyone, from a fourth grader doing long division to a retiree taking on a new language. But they directly contradict much of the common wisdom about good study habits, and they have not caught on.

For instance, instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing.

“We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”

Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.

Ditto for teaching styles, researchers say. Some excellent instructors caper in front of the blackboard like summer-theater Falstaffs; others are reserved to the point of shyness. “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”

But individual learning is another matter, and psychologists have discovered that some of the most hallowed advice on study habits is flat wrong. For instance, many study skills courses insist that students find a specific place, a study room or a quiet corner of the library, to take their work. The research finds just the opposite. In one classic 1978 experiment, psychologists found that college students who studied a list of 40 vocabulary words in two different rooms — one windowless and cluttered, the other modern, with a view of a courtyard — did far better on a test than students who studied the words twice, in the same room. Later studies have confirmed the finding, for a variety of topics.

The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.

“What we think is happening here is that, when the outside context is varied, the information is enriched, and this slows down forgetting,” said Dr. Bjork, the senior author of the two-room experiment.

Varying the type of material studied in a single sitting — alternating, for example, among vocabulary, reading and speaking in a new language — seems to leave a deeper impression on the brain than does concentrating on just one skill at a time. Musicians have known this for years, and their practice sessions often include a mix of scales, musical pieces and rhythmic work. Many athletes, too, routinely mix their workouts with strength, speed and skill drills.

The advantages of this approach to studying can be striking, in some topic areas. In a study recently posted online by the journal Applied Cognitive Psychology, Doug Rohrer and Kelli Taylor of the University of South Florida taught a group of fourth graders four equations, each to calculate a different dimension of a prism. Half of the children learned by studying repeated examples of one equation, say, calculating the number of prism faces when given the number of sides at the base, then moving on to the next type of calculation, studying repeated examples of that. The other half studied mixed problem sets, which included examples of all four types of calculations grouped together. Both groups solved sample problems along the way, as they studied.

A day later, the researchers gave all of the students a test on the material, presenting new problems of the same type. The children who had studied mixed sets did twice as well as the others, outscoring them 77 percent to 38 percent. The researchers have found the same in experiments involving adults and younger children.

“When students see a list of problems, all of the same kind, they know the strategy to use before they even read the problem,” said Dr. Rohrer. “That’s like riding a bike with training wheels.” With mixed practice, he added, “each problem is different from the last one, which means kids must learn how to choose the appropriate procedure — just like they had to do on the test.”

These findings extend well beyond math, even to aesthetic intuitive learning. In an experiment published last month in the journal Psychology and Aging, researchers found that college students and adults of retirement age were better able to distinguish the painting styles of 12 unfamiliar artists after viewing mixed collections (assortments, including works from all 12) than after viewing a dozen works from one artist, all together, then moving on to the next painter.

The finding undermines the common assumption that intensive immersion is the best way to really master a particular genre, or type of creative work, said Nate Kornell, a psychologist at Williams College and the lead author of the study. “What seems to be happening in this case is that the brain is picking up deeper patterns when seeing assortments of paintings; it’s picking up what’s similar and what’s different about them,” often subconsciously.

Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out.

“With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”

When the neural suitcase is packed carefully and gradually, it holds its contents for far, far longer. An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.

No one knows for sure why. It may be that the brain, when it revisits material at a later time, has to relearn some of what it has absorbed before adding new stuff — and that that process is itself self-reinforcing.

“The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”

That’s one reason cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.

Dr. Roediger uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle alters that property: “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.

In one of his own experiments, Dr. Roediger and Jeffrey Karpicke, also of Washington University, had college students study science passages from a reading comprehension test, in short study periods. When students studied the same material twice, in back-to-back sessions, they did very well on a test given immediately afterward, then began to forget the material.

But if they studied the passage just once and did a practice test in the second session, they did very well on one test two days later, and another given a week later.

“Testing has such bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”

Of course, one reason the thought of testing tightens people’s stomachs is that tests are so often hard. Paradoxically, it is just this difficulty that makes them such effective study tools, research suggests. The harder it is to remember something, the harder it is to forget later. This effect, which researchers call “desirable difficulty,” is evident in daily life. The name of the actor who played Linc in “The Mod Squad”? Francie’s brother in “A Tree Grows in Brooklyn”? The name of the co-discoverer, with Newton, of calculus?

The more mental sweat it takes to dig it out, the more securely it will be subsequently anchored.

None of which is to suggest that these techniques — alternating study environments, mixing content, spacing study sessions, self-testing or all the above — will turn a grade-A slacker into a grade-A student. Motivation matters. So do impressing friends, making the hockey team and finding the nerve to text the cute student in social studies.

“In lab experiments, you’re able to control for all factors except the one you’re studying,” said Dr. Willingham. “Not true in the classroom, in real life. All of these things are interacting at the same time.”

But at the very least, the cognitive techniques give parents and students, young and old, something many did not have before: a study plan based on evidence, not schoolyard folk wisdom, or empty theorizing.

Benedict Carey, New York Times



Ephemera in Full

The Sage of Baltimore was not always so sagacious

H.L. Mencken (1880-1956) is a revered figure in the history of American letters, and understandably so. But after enduring the heavy weather of these two Library of America volumes—a gathering of Mencken essays and journalism originally published between 1919 and 1927 in a series of books called “Prejudices”—I am beginning to have my doubts. I remember Tom Wolfe once telling me that Mencken was one of the greatest stylists of the English language, alongside Malcolm Muggeridge and George Orwell. Again, it is hard to disagree. But I strongly suggest that Mr. Wolfe purge the “Prejudices” from his library.


H.L. Mencken at his writing desk in the mid-1940s.

Some of the pieces in the “Prejudices” series—there were six volumes in all—are very good, as I had remembered. In particular, Mencken’s essay on William Jennings Bryan, the prairie populist and endless presidential candidate, remains a classic and well worth re-reading. But the vast majority of the pieces in “Prejudices” are tedious and ephemeral, even terrible at times.

Anyone seeking the reasons for Mencken’s high reputation would do better by turning to Huntington Cairns’s “The American Scene” (1965), an anthology that judiciously selects from Mencken’s autobiographical works, his writings on the American language and his various superb efforts at reportage, including his famous account of the 1925 Scopes Trial, in which fundamentalist religion famously butted heads with evolutionary theory.

Cairns, it is true, included some flatulent “Prejudices” essays in his anthology, but with explanations of their origin—either from Mencken or from Cairns himself—along with the dates of the essays’ original publication. There are no dates included in the Library of America volumes and no contextual introductions to the pieces offered. Much of the time we have no idea what Mencken is shouting about. He comes off as a gasbag.

The appendix to the first Library of America volume includes a selection from Mencken’s posthumous “My Life as Author and Editor” in which he comments on the “Prejudices” series. He tells us, quoting himself, that the first series, published in 1919, was “a stinkpot designed to ‘keep the animals perturbed.’ ” But he confesses that the collection contained “light stuff, chiefly rewritten from the Smart Set,” the magazine that Mencken edited with George Jean Nathan from 1914 to 1923. “The real artillery fire,” Mencken wrote, “will begin a bit later.” Where it did begin it was often off the mark—for instance, not 200,000 soldiers dead in the Civil War, as he says, but 621,000.

Mencken admits that the pieces in “Prejudices: Second Series” (1920) are not original. He was still larding up what he considered important essays with “surplus material left out of the 1922 revision of In Defense of Women” and other writings, including, as he put it, “reworkings of my Smart Set reviews and my contributions to ‘Répétition Générale.’”

The “Répétition Générale” that Mencken mentions was a running Smart Set feature offering facetious definitions of trends and types and brief editorial comments. To take an example not included in the Library of America volumes: “The Bald-Headed Man: The man with a bald head, however eminent his position, always feels slightly ill at ease in the presence of a man whose dome is still well thatched.” Clearly much of the material in the Smart Set was not of great weight.

Mencken continued such rewrites and regurgitations for an additional four “Prejudices.” He is at his worst when he writes on what he considers important topics: the South, farmers, the national letters, the American character.

It is always amusing to call a farmer “a prehensile moron.” Or to compare a politician to “an honest burglar.” But often Mencken simply falls into a gimmick. He strings together absurd similes, preposterous comparisons and long lists, and there is an enormous amount of repetition. After a while, it all becomes tiresome.

H.L. Mencken: Prejudices


In the essay “On Being an American,” he writes that a man who has to make a living in the U.S. must keep in mind that “the Republic has never got half enough bond salesmen, quack doctors, ward leaders, phrenologists, Methodist evangelists, circus clowns, magicians, soldiers, farmers, popular song writers, moonshine distillers, forgers of gin labels, mine guards, detectives, spies, snoopers, and agents provocateurs.” One gets the point quickly, and yet he goes on and on.

Later, after running a sentence for 17 lines, he ends by referring to “thousands [of Americans] who put the Hon. Warren Gamaliel Harding beside Friedrich Barbarossa and Charlemagne, and hold the Supreme Court to be directly inspired by the Holy Spirit, and belong ardently to every Rotary Club, Ku Klux Klan, and anti-Saloon League, and choke with emotion when the band plays ‘The Star-Spangled Banner,’ and believe with the faith of little children that one of Our Boys, taken at random, could dispose in a fair fight of ten Englishmen, twenty Germans, thirty Frogs, forty Wops, fifty Japs, or a hundred Bolsheviki.” There is a lot of padding here.

In the same essay he says that the American belief in the good life or progress or happy landings or something “is not shared by most reflective foreigners, as anyone may find out by looking into such a book as Ferdinand Kürnberger’s ‘Der Amerikamüde,’ Sholem Asch’s ‘America,’ Ernest von Wolzogen’s ‘Ein Dichter in Dollarica,’ W.L. George’s ‘Hail, Columbia!’, Annalise Schmidt’s ‘Der Amerikanische Mensch’ or Sienkiewicz’s ‘After Bread,’ or by hearkening unto the confidences, if obtainable, of such returned immigrants as Georges Clemenceau, Knut Hamsun, George Santayana, Clemens von Pirquet, John Masefield, and Maxim Gorky and, via the ouija board, Antonin Dvorak, Frank Wedekind and Edwin Klebs.” Such strings of slightly ominous names could be seen as part of the “artillery fire” Mencken referred to in his posthumous reflections.

As I say, Mencken was a superb reporter, and when he stuck to reporting he was an original. In “Prejudices: Fifth Series,” he was running out of steam, but then comes his incomparable “In Memoriam: W.J.B.” It begins: “Has it been duly marked by historians that the late William Jennings Bryan’s last secular act on this globe of sin was to catch flies?”

Mencken takes us to the Dayton, Tenn., monkey trial, reporting on Bryan’s confrontation with Clarence Darrow, and his eyes are wide open: Bryan “liked people who sweated freely, and were not debauched by the refinements of the toilet.” Bryan makes “progress up and down the Main Street of little Dayton, surrounded by gaping primates from the uplands. . . . There stood the man who had been thrice a candidate for the Presidency of the Republic—there he stood in the glare of the world, uttering stuff that a boy of eight would laugh at! The artful Darrow led him on.”

The next essay in the collection is even better. In “The Hills of Zion,” Mencken actually attends a meeting of locals who speak in tongues and sweat a lot. “The heap of mourners was directly before us. They bounced into us as they cavorted. The smell that they radiated, flooding there in that obscene heat, half-suffocated us. Not all of them, of course, did the thing in the grand manner. Some merely moaned and rolled their eyes.” It is all here, even Mencken’s speculations of a lewd nature.

Mencken was the first celebrity intellectual. Mass communications was in place, and he was present to take advantage of it. He was a brilliant stylist, and when he stuck to reporting, Tom Wolfe had him right. But not in these pieces, and not in his crank diatribes. He flourished in the first quarter of the century, but I doubt there would be room in America for him now. His prose style aside, he was an independent mind. There are only two camps today, and he would be in neither.

Mr. Tyrrell, a syndicated columnist, is editor in chief of The American Spectator. His current book is “After the Hangover: The Conservatives’ Road to Recovery,” published by Thomas Nelson.



Mark Pilkington’s top 10 books about UFOs

Down to earth accounts … Alien Parking sign in Roswell, New Mexico.

Mark Pilkington is a writer with a fascination for the further shores of culture, science and belief. He also publishes books as Strange Attractor Press. In Mirage Men, after delving into the subject’s history and meeting former air force and intelligence insiders, Pilkington concludes that instead of covering up tales of UFO crashes and alien visitors, the US military and intelligence services have been promoting them all along as part of their cold war counter-intelligence operations.

“The UFO arena acts as a kind of vivarium for a range of psychological, sociological and anthropological experiences, beliefs, conditions and behaviours. They remind us that the Unknown and the Other are still very much at large in our modern world, and provide us with a fascinating glimpse of folklore in action. A tiny few UFO reports also still present us with genuine mysteries.

“The first book about UFOs as we know them was The Flying Saucer, a 1948 novel by British former spy Bernard Newman. I’m not sure how many UFO books have been written since then, but I’d guess that it’s well over 1000. Here, in chronological order, are 10 that I can recommend as either informative, entertaining, puzzling or all three at once.”

1. The Report on Unidentified Flying Objects by Edward J Ruppelt

An insider’s account of the crucial, early days of the UFO story, by the man who headed the US Air Force’s official UFO investigation from 1951 to 1953. Ruppelt documents shifting Air Force attitudes to the phenomenon, which ranged from aggressive denial to apparent endorsement of alien visitation in an infamous 1952 Life magazine article. In a revised edition, published in 1960, Ruppelt was more dismissive of the subject. He died the same year, aged 37.

2. Flying Saucer Pilgrimage by Bryant and Helen Reeve

A charming glimpse into the early days of the UFO culture, when the lines between spiritualism, occultism and ufology were largely indistinguishable. The Reeves travelled the US in search of “the Saucerers”, meeting many key figures of the time before making contact with real Space People via the wonders of Outer Space Communication (OSC) and a portable tape recorder. Many important questions are answered: How do we look to the space people? Do they believe in Jesus Christ? Is this civilisation ending?

3. Flying Saucers: A Modern Myth of Things Seen in the Sky by Carl Jung

It was only natural that the Swiss mystic and philosopher-shrink, fascinated by anomalous experiences, should turn his attention to the UFO mystery. Considering UFOs as a “visionary rumour” and a manifestation of the mythic unconscious, Jung compares the perfect circle of the flying disc to the mandala, notes the dreamlike impossibility of many reports and presciently recognises the deep spiritual pull that the UFO would exert over the next half century.

4. The UFO Experience By J Allen Hynek

Astronomer Hynek was an air force consultant on UFOs for much of his life, and over time transformed from something of a Doubting Thomas to a St Paul. He’s regarded as a saint in UFO circles, largely for this book, a sober yet sympathetic overview of the UFO problem that excoriates the US Air Force for their failure to treat the phenomenon seriously. Hynek devised the “Close Encounters” system for categorising UFO sightings, and has a cameo during the cosmic disco climax of Spielberg’s blockbusting film (that’s him with the pipe looking like Colonel Sanders).

5. The Mothman Prophecies by John Keel

Merging unconscious deceptions with deliberate fictions, many of the wilder UFO books would have even the most intrepid postmodernists cowering behind the sofa. Keel, however, was a two-fisted trickster who knew exactly what he was doing, and this reads like Thomas Pynchon crossed with Philip K Dick channelling HP Lovecraft. In the late 1960s, Point Pleasant, West Virginia, was plagued by bizarre entities, UFO sightings and robotic, jelly-fixated Men in Black; Keel investigated only to find himself in too deep and the town doomed to real-life disaster.

6. Messengers of Deception by Jacques Vallée

An intriguing, disconcerting book from one of the field’s most progressive thinkers. Vallée, a French astronomer and computer scientist who worked with J Allen Hynek, became entangled in bizarre mind games while investigating UFO cults in the 1970s. Amongst others, Vallée encountered HIM (Human Individual Metamorphosis), led by “Bo and Peep” who would steer the Heaven’s Gate group to their collective death two decades later.

7. Report on Communion by Ed Conroy

Whitley Strieber’s Communion is one of the 20th century’s great literary mysteries and Conroy’s spinoff is just as curious. A hard-nosed investigative journalist, Conroy examined Strieber’s alleged alien abduction experiences and odd life story while also researching the history of UFOs and its parallels in folkloric encounter narratives. In a testament to the power of UFOria and the allure of the Other, by the end of the book he’s being buzzed by shape-shifting helicopters and wondering whether he too has had contact with the Visitors.

8. Remarkable Luminous Phenomena in Nature by William Corliss

One of at least 18 hardback volumes of anomalies collected by this modern-day Charles Fort. Ball lightning (miniature, giant, black, object-penetrating and ordinary), bead lightning, lightning from clear skies, pillars of light, glowing owls, luminous bubbles, oceanic light wheels, earthquake lights, marsh gas, unusual auroras, glowing fogs. And that’s just for starters. I love this book.

9. The Trickster and the Paranormal by George Hansen

Hansen, a former professional laboratory parapsychologist, provides illumination, insight and perspective on the wider paranormal research field, UFOs included. Drawing on folklore, anthropology, literary theory and sociology, Hansen points out the integral, destabilising role of Trickster archetypes in human society. While dwelling predominantly amongst its esoteric fringes, the Trickster can also be seen lurking in the corridors of political, military and corporate power.

10. Out of the Shadows by David Clarke and Andy Roberts

A rock-solid history of the UFO phenomenon in Britain by two of our most reliable and indefatigable researchers. Clarke and Roberts work from interviews and official documentation detailing everything from genuine aerial mysteries during the second world war (investigated for the RAF by the Goon Show’s Michael Bentine) to the cold war follies of 1980’s Rendlesham Forest incident. Serious UFO research as it should be done.



The pursuit of evil

A complicated man, obsessed by his search for justice

Driven by memory

Simon Wiesenthal: The Life and Legends. By Tom Segev. Doubleday; 482 pages; $35. Jonathan Cape; £25.

AMONG the 300,000 pieces of paper in Simon Wiesenthal’s private archive is a letter from a Holocaust survivor explaining why he had ceased to believe in God. In Tom Segev’s description: “God had allowed SS troops to snatch a baby from his mother and then use it as a football. When it was a torn lump of flesh they tossed it to their dogs. The mother was forced to watch. Then they ripped off her blouse and made her use it to clean the blood off their boots.”

What made a man who survived three concentration camps cancel plans after the war to move first to America, then Israel, and instead devote his life in Vienna to amassing and immersing himself in memories that most survivors spent the rest of their days trying to forget? The famed “Nazi hunter” tended to guard his emotions by wrapping them in anecdotes for public consumption: he would talk of a girl he had seen being marched towards a mass grave whose desperate look seemed to say “Don’t forget us”. In his need to protect sources and conceal his work with government agencies, including Israel’s Mossad, such anecdotes spun out of control, multiplying into many different versions in his books and interviews.

Mr Segev, justly celebrated for his histories of formative moments of the state of Israel, is as careful a biographer as he is an historian, and he excels at teasing apart these conflicting tales. The picture that emerges is often unflattering. Wiesenthal comes across as a self-important busybody, obsessed with titles and recognition, squabbling with rivals. “Contrary to the myth he spun around himself”, Mr Segev writes, “he never operated a worldwide dragnet” but ran a virtually one-man show out of a cramped office.

Yet Mr Segev also comes to his subject’s defence when warranted. In the bitterly contested matter of the capture of Adolf Eichmann, for instance, he finds that Wiesenthal does deserve much credit for bringing the Nazi war criminal’s hideout in Argentina to the attention of the Israeli, German and American authorities, who took years to act—though he also describes Wiesenthal’s subtle manoeuvres to increase his share of the glory afterwards. As to Wiesenthal’s defence of Kurt Waldheim, the Austrian president who turned out to have lied about his war record—a controversy that cost Wiesenthal the Nobel peace prize—Mr Segev dissects his behaviour and finds it explicable, if not excusable.

The ultimate judgment is a compassionate one. Wiesenthal was driven, Mr Segev concludes, by guilt at not only having survived, but having had an easier time than most European Jews during much of the war. His mythomania may have been a way to burnish his image. But it also served in his pursuit of justice. His reputation, as well as a superb memory and a knack for networking, made him a magnet for countless scraps of information about suspected war criminals which he passed on to the authorities, badgering them relentlessly to make arrests. He battled official indifference, anti-Semitic attacks and, for many years, a chronic lack of funds.

How many Nazis he really helped to jail is impossible to say. More important was what he did to bring the Holocaust’s victims and their horrendous memories to worldwide attention. Gripping yet sober, this meticulous portrait of a complicated man is unlikely to be bettered.

