Some of my favorite books are the ones I’ve never opened
Like most readers, I love browsing in bookshops and libraries. I enjoy running my fingers along the spines of books and reading titles and authors’ names, pulling the books out and flipping through them, thinking about the stories inside them.
There are hundreds of films I’ve never seen, thousands of songs I’ve never heard. But I don’t anticipate them the way that I do books. I don’t imagine the things I would learn from them, how my life would be subtly but surely different after I had experienced them. With books, the anticipation is different. In fact, with books, it is sometimes the best part.
Last week I bought a book. I looked at the blurb and read the first paragraph, and I could feel the texture of the book in my mind. It was going to be a steadily paced yet exciting coming-of-age story about three young girls who go camping in the woods, stumble across a couple vacationing in a cabin, and see things through the windows that upend their world. It would move from the girls in their clumsy tent, to their fable-like journey through the forest, to the glowing windows of the cabin. The story was going to be overflowing with the smell of mulching leaves, the stale sweetness of fizzy drinks on the tongue, the crackle of empty sweet wrappers. It was going to be honest and real and uncomfortably sensual.
Except that it wasn’t about that at all: It was a thriller about a woman having an affair. With every sentence I read, the book I had imagined shrank smaller and smaller. By the end of the third page, it had disappeared. The actual book was by no means bad; it just wasn’t the book I thought it would be. That dense, bittersweet story I had anticipated reading did not exist, and I felt a sense of loss, a yearning for something unreal. And yet somehow I had read that nonexistent book, because I had created it myself. I was not changed by the experience of reading that book, but perhaps I was changed by my own anticipation of what it could have been.
So I save books. I buy a book with every intention of reading it, but then the more I look at it and think about how great it is going to be, the less I want to read it. I know that it can’t possibly live up to my expectations, and slowly, the joy of my own imaginings becomes more precious to me than whatever actually lies between the covers.
Most books I read just get chewed up and spat out. I enjoy them, but ask me in a year and all I’ll remember is a vague shape of plot, the sense of a character, perhaps the color of a sky in summer, or the taste of borscht. My favorite books, of course, do stay with me. They shift and color my world, and I am different for having read them. But before reading a book, there’s no way to know which it will be: a slick of lipstick that I wear for a day and then wipe off, or a tattoo that stays on my body forever. An unread book has the potential to be the greatest book I have ever read. Any unread book could change my life.
I currently have about 800 unread books on my shelves. Some would find this excessive, and they would probably be right. But to me, my imagined library is as personal and meaningful as the collection of books I have read, perhaps even more so. Each book is intense and vivid in my mind; each book says complex things about my life, history, and personality. Each book has taught me something about the world, or at least about my own expectations of the world, my idea of its possibilities. Here are some examples:
I think that Mervyn Peake’s “Gormenghast Trilogy” is at once claustrophobic and expansive. It has the texture of solid green leaves crunched between your molars. It tastes of sweetened tea and stale bread and dust. When I read it, I will feel close to my father because it is his favorite book. Reading the Gormenghast books will allow me to understand my father in ways I currently do not, and at certain points in the book I will put it down and stare into the middle distance and say aloud, “Oh. Now my childhood makes sense.”
Radclyffe Hall’s “The Well of Loneliness” will make me sad and proud and indignant. I will no longer get tangled up in discussions about gender issues, because I will finally have clear-cut and undeniable examples of how gender stereotyping is bad for everyone. Reading it will make me feel like an integral part of queer history and culture, and afterwards I will feel mysteriously connected to all my fellow LGBT people. Perhaps I will even have gaydar.
Roberto Bolaño’s “2666” is an obsessive and world-shifting epic. When I read it, I will be completely absorbed by it. It will be all I think about. It will affect my daily life in ways I can’t fully understand, and when I finish it, I will have come to profound revelations about the nature of existence. I will finally understand all the literary theory I wrote essays on when I was at college.
Manuel Puig’s “Kiss of the Spider Woman” has been on my shelves for 10 years, dragged from house to house without its spine ever being cracked. It was given to me by a friend when I was a teenager, and I cannot read it because when I do, I will finally understand my friend, and that scares me. Her life is a set of nesting dolls with something solid and simple at the center, and I do not know whether that thing is pure gold or sticky-dark tar-poison. Holding the book is holding my friend’s hand, and that is as close as I dare to get.
Jeff VanderMeer’s “City of Saints and Madmen” is an entire universe between two covers. It contains sculptures and mix tapes and skyscrapers and midwives and sacrifices, and everything else that exists in my own world, but with every edge crusted in gilt and mold. It will open my eyes to a new way of seeing, and when I finish it, I will somehow have been transformed from being just a person who writes into A Real Writer.
Anais Nin’s “Journals” will shatter my illusions and create new ones. Anais Nin is everything that I fear and hope that I am; when I read her “Journals,” she might be everything I think she is. This is thrilling and terrifying at the same time, because then I will be forced to emulate her life of complex heartaches, pearls and lace, all-day sleep, and absinthe-soaked dinner parties — and those things are just not practical. And, even more frightening, she may not be who I think she is. If she is not special, then no one can be special.
I am not ready for Françoise Sagan’s “Aimez-Vous Brahms.” At 18 I read her novella “Bonjour Tristesse” and I was transformed: This book held a truth I didn’t even know I knew. The protagonist of “Aimez-Vous Brahms” is 39, and so when I read it at 39 it will tell me the truth the same way that “Bonjour Tristesse” did when I was 18. Like 18, 39 is the perfect meeting of anticipation and experience, and this book will guide me through into the next phase of my life.
I have not read these books because I worry that they’re not the books I think they are. I’m sure they are wonderful books, but no book could possibly contain all the knowledge and understanding I am expecting from these. Perhaps I will never read them. This is the same logic that means I will probably never visit Russia: I imagine that a trip to Russia will be the crux of my life. Every moment will be candy-colored buildings and strong coffee on silver platters, steam trains slipping past quaint farmhouses and huge black bears glimpsed through the snow, furred hats against my ears, and history seeping into my veins. I know that if I actually go to Russia, there will be moments where I don’t like the food, or my feet ache, or I can’t sleep, or I get annoyed at not being able to read the Cyrillic signs. If I keep it in my imagination, it stays pure and perfect.
There is another reason to leave books unread: because I know I really will love them. This might seem nonsensical, and I suppose it is. I am a writer, and I know that certain books will resonate deeply and perfectly because they are similar in some way to my own writing, though vastly better. This is why I have not read Alice Greenway’s “White Ghost Girls,” a short and lyrical novel about sisters in 1960s Hong Kong; or Francesca Lia Block’s fantastical erotica novellas, “Ecstasia” and “Primavera”; or Stewart O’Nan’s small-town ghost story, “The Night Country”; or anything ever written by Martin Millar.
I know that I will love them and want to learn from them, and so I don’t read them: firstly because it is tiring to read that way, with your eyes and ears and brain constantly absorbing; and secondly because once I read them they will be over, the mystery will be revealed. These books have affected my writing, and I haven’t even read them. Maybe we can learn as much from our expectations of a story as we can from the actual words on the page.
Try an experiment with me. It might seem odd at first, but go to your bookshelves and pick a book you have not read. Hold it in your hands. Look at the cover and read the description on the back. Think about what the story might be about, what themes might be in it, what it might say about the world you inhabit, whether it can make you imagine an entirely different world.
There is absolutely nothing wrong with that book. It might prove to be a great book, the best book you have ever read. But I suggest that the literary universe you have just created might be more exciting and enlightening than the one contained within those covers. Your imagination contains every possible story, every possible understanding, and any book can only be one tiny portion of that potential world.
Kirsty Logan is a fiction writer in Glasgow.
Electrified sand. Exploding balloons. The long and colorful history of weather manipulation.
This brutal winter has made sure that no one forgets who’s in charge. The snow doesn’t fall so much as fly. Cars stay buried, and feet stay wet. Ice is invisible, and every puddle is deeper than it looks. On the eve of each new storm, the citizenry engages in diligent preparations, rearranging travel plans, lining up baby sitters in case the schools are closed, and packing comfortable shoes for work so they’re not forced to spend all day wearing their awful snow boots.
One can’t help but feel a little embarrassed on behalf of the species, to have been involved in all this fuss over something as trivial as the weather. Is the human race not mighty? How are we still allowing ourselves, in the year 2011, to be reduced to such indignities by a bunch of soggy clouds?
It is not for lack of trying. It’s just that over the last 200 years, the clouds have proven an improbably resilient adversary, and the weather in general has resisted numerous well-funded — and often quite imaginative — attempts at manipulation by meteorologists, physicists, and assorted hobbyists. Some have tried to make it rain, while others have tried to make it stop. Balloons full of explosives have been sent into the sky, and large quantities of electrically charged sand have been dropped from airplanes. One enduring scheme is to disrupt and weaken hurricanes by spreading oil on the surface of the ocean. Another is to drive away rain by shooting clouds with silver iodide or dry ice, a practice that was famously implemented at the 2008 Olympics in Beijing and is frequently employed by farmers throughout the United States.
There’s something deeply and perennially appealing about the idea of controlling the weather, about deciding where rain should fall and when the sun should shine. But failing at it has been just as persistent a thread in the human experience. In a new book called “Fixing the Sky: The Checkered History of Weather and Climate Control,” Colby College historian of science James Rodger Fleming catalogs all the dreamers, fools, and pseudo-scientists who have devoted their lives to weather modification, tracing the delusions they shared and their remarkable range of motivations. Some wanted to create technology that would be of use to farmers, so that they would no longer have to operate at the mercy of unpredictable droughts. Others imagined scenarios in which the weather could be weaponized and used against foreign enemies. Still others had visions of utopia in which the world’s deserts were made fertile and every child was fed.
“Even some of the charlatans had meteorology books on their desks,” Fleming said last week. “Most had simple ideas: for instance, that hot air rises. These guys’ll have some sense of things, but they won’t have a complete theory of the weather system. They have a principle they fix on and then they try to build their scheme from there.”
What they underestimated, in Fleming’s view — what continues to stymie us all, whether we’re seeding clouds or just trying to plan for the next commute — is weather’s unfathomable complexity. And yet, the dream has stayed alive. Lately, the drive to fix the climate has taken the form of large-scale geoengineering projects designed to reverse the effects of global warming. Such projects — launching mirrors into space to reflect solar radiation away from the earth, for instance — are vastly more ambitious than anything a 19th-century rainmaker could have cooked up, and would employ much more sophisticated technology. What’s unclear, as one looks back at the history of weather modification research, is whether all that technology makes it any more likely that our ambitions will be realized, or if it just stands to make our failure that much more devastating.
The story of modern weather control in America, as Fleming tells it in “Fixing the Sky,” begins on a Wednesday morning in November of 1946, some 14,000 feet in the air above western Massachusetts. Sitting up there in a single-engine airplane was Vincent Schaefer, a 41-year-old scientist in the employ of the General Electric Research Laboratory, whose principal claim to fame up to that point was that he’d found a way to make plastic replicas of snowflakes. Prior to the flight, Schaefer had conducted an experiment that seemed to point toward a method for manipulating clouds using small bits of dry ice. If the technology could be exported from the lab and made to work in the real world, the potential applications would be limitless. With that in mind, flying over the Berkshires, Schaefer waited until his plane entered a suitable-looking cloud, then opened a window and let a total of six pounds of crushed dry ice out into the atmosphere. Before he knew it, “glinting crystals” of snow were falling obediently to the ground below.
GE announced the results of the demonstration the next day. “SNOWSTORM MANUFACTURED,” read the massive banner headline on the front page of The Boston Globe. The GE lab was deluged with letters and telegrams from people excited about the new technology for all sorts of reasons. One asked if it might be possible to get some artificial snow for use in an upcoming Christmas pageant. Another implored the company to aid a search-and-rescue effort at Mount Rainier by getting rid of some inconveniently located clouds. Hollywood producers inquired about doing up some blizzards for their movie sets. Separately, a state official from Kansas wrote to President Truman in hopes that GE’s snow-making technology could be used to help end a drought there. It seemed to be an all-purpose miracle, as though there was not a problem on earth it couldn’t fix.
Insofar as technological advancement in general is all about man’s triumph over his conditions, a victory over the weather is basically like beating the boss in a video game. And GE’s breakthrough came at a moment when the country was collectively keyed up on the transformative power of technology: World War II had just ended, and we suddenly had the bomb, radar, penicillin, and computers. But the results of Schaefer’s experiment would have inspired the same frenzied reaction in any era. “Think of it!” as one journalist wrote in a 1923 issue of Popular Science Monthly, about an earlier weather modification scheme, “Rain when you want it. Sunshine when you want it. Los Angeles weather in Pittsburgh and April showers for the arid deserts of the West. Man in control of the heavens — to turn on or shut them off as he wishes.”
It’s a longstanding, international fantasy — one that goes all the way back to ancient Greece, where watchmen stood guard over the skies and alerted their countrymen at the first sign of hail so that they might try to hold off the storm by quickly sacrificing some animals. The American tradition begins in the early 19th century, when the nation’s first official meteorologist, James “the Storm King” Espy, developed a theory of rainmaking that involved cutting down large swaths of forest and lighting them on fire. Espy had observed that volcanic eruptions were often followed by rainfall. He thought these fires would work the same way, causing hot air to rise into the atmosphere, cool, and thus produce precipitation.
For years he unsuccessfully sought funding from the government so that he might test his theory, describing in an 1845 open letter “To the Friends of Science” a proposal wherein 40-acre fires would be set every seven days at regular intervals along a 600-mile stretch of the Rocky Mountains. The result, he promised, would be regular rainfall that would not only ease the lives of farmers and make the country more productive but also eradicate the spread of disease and make extreme temperatures a thing of the past. He did not convince the friends of science, however, and lived out his distinguished career without ever realizing his vision.
Others had better luck winning hearts and minds. In 1871, a civil engineer from Chicago named Edward Powers published a book called “War and the Weather, or, The Artificial Production of Rain,” in which he argued that rainstorms were caused by loud noises and could be induced using explosives. He found a sympathetic collaborator in a Texas rancher and former Confederate general by the name of Daniel Ruggles, who believed strongly that all one had to do to stimulate rain was send balloons full of dynamite and gunpowder up into the sky and detonate them. Another adherent of this point of view was Robert Dyrenforth, a patent lawyer who actually succeeded in securing a federal grant to conduct a series of spectacular, but finally inconclusive, pyrotechnic experiments during the summer and fall of 1891.
A few decades after Dyrenforth’s methods were roundly discredited, an inventor named L. Francis Warren emerged with a new kind of theory. Warren, an autodidact who claimed to be a Harvard professor, believed that the trick to rainmaking wasn’t heat or noise, but electrically charged sand, which if sprinkled from the sky could not only produce rain but also break up clouds. His endgame was a squad of airplanes equipped to stop droughts, clear fog, and put out fires. It was Warren’s scheme that inspired that breathless Popular Science article, but after multiple inconclusive tests — including some funded by the US military — it lost momentum and faded away.
For the next 50 years, charlatans and snake-oil salesmen inspired by Warren, Dyrenforth, and the rest of them went around the country hawking weather control technologies that had no basis whatsoever in science. It wasn’t until after World War II, with the emergence of GE’s apparent success dropping dry ice into clouds, that the American public once again had a credible weather control scheme to get excited about. Once that happened, though, it was off to the races, and by the 1950s, commercial cloud seeding — first with dry ice, then with silver iodide — was taking place over an estimated 10 percent of US land. By the end of the decade, it was conventional wisdom that achieving mastery over the weather would be a decisive factor in the outcome of the Cold War. In 1971, it was reported that the United States had secretly used cloud-seeding to induce rain over the Ho Chi Minh Trail in hopes of disrupting enemy troop movements. In 1986, the Soviet Union is said to have used it to protect Moscow from radioactivity emanating from the Chernobyl site by steering toxic clouds to Belarus and artificially bursting them. There’s no way to know whether the seeding operation actually accomplished anything, but people in Belarus to this day hold the Kremlin in contempt for its clandestine attempt to stick them with the fallout.
You’d think, given mankind’s record of unflappable ingenuity, we would have had weather figured out by now. But after decades of dedicated experimentation and untold millions of dollars invested, the world is still dealing with droughts, floods, and 18-foot urban snowbanks. What is making this so difficult? Why is it that the best we can do when we learn of an approaching snowstorm is brace ourselves and hope our street gets properly plowed?
The problem is that weather conditions in any given place at any given time are a function of far too many independent, interacting variables. Whether it’s raining or snowing is never determined by any one overpowering force in the atmosphere: It’s always a complicated and unpredictable combination of many. Until we have the capability to micromanage the whole system, we will not be calling any shots.
Fleming, for his part, doesn’t believe that a single one of the weather modification schemes he describes in his book ever truly worked. Even cloud-seeding, he says, as widespread as it is even today, has never been scientifically proven to be effective. Kristine Harper, an assistant professor at Florida State University who is writing a book about the US government’s forays into weather modification, says that doesn’t necessarily mean cloud-seeding is a total waste of time, just that there’s no way to scientifically measure its impact.
“You’d be hard-pressed to find evidence even at this point that there’s a statistically significant difference between what you would get from a seeded cloud and an unseeded cloud,” she said.
The good news for practitioners of weather control is that amid all this complexity, they can convince themselves and others that they deserve credit for weather patterns they have probably had no role whatsoever in conjuring. The bad news for anyone who’d like to prevent the next 2-foot snow dump — or the next 2 degrees of global warming — is that there’s just no way to know. As Fleming’s account of the last 200 years suggests, it may be possible to achieve a certain amount by intervention. But it’s a long way from anything you could call control. Those who insist on continuing to shake their fists at the sky should make sure they have some warm gloves.
Leon Neyfakh is the staff writer for Ideas.
The case that human athletes have reached their limits
Last summer, David Oliver tried to become one of the fastest men in the world. The American Olympic hurdler had run a time of 12.89 seconds in the 110 meters at a meet in Paris in July. The time was two hundredths of a second off the world record, 12.87, owned by Cuba’s Dayron Robles, a mark as impressive as it was absurd. Most elite hurdlers never break 13 seconds. Heck, Oliver seldom broke 13. He’d spent the majority of his career whittling down times from the 13.3 range. But the summer of 2010 was special. Oliver had become that strange athlete whose performance finally equaled his ambition and who, as a result, competed against many, sure, but was really only going against himself.
In Paris, for instance, Oliver didn’t talk about how he won the race, or even the men he ran against. He talked instead about how he came out of the blocks, how his hips dipped precariously low after clearing the sixth hurdle, how he planned to remain focused on the season ahead. For him, the time — Robles’s time — was what mattered. He had a blog and called it: “Mission: 12.85: The race for the record.” And on this blog, after Paris, Oliver wrote, “I am in a great groove right now and I can’t really pinpoint what set it off….Whatever groove I’m in, I hope I never come out of it!”
The next week, he had a meet in Monaco. The press billed it as Oliver’s attempt to smash the world record. But he had a terrible start — “ran the [worst] first couple of hurdles of the season,” as he would later write. Oliver won the race, but with a time of 13.01.
On his blog, Oliver did his best to celebrate; he titled his Monaco post, “I’m sitting on top of the world.” (And why not? The man had, after all, beaten the planet’s best hurdlers for the second straight week, almost all of whom he’d see at the 2012 Olympics.) But the post grew defensive near the end. He reasoned that his best times should improve.
But they haven’t. That meet in Paris was the fastest he’s ever run.
Two recent, provocative studies hint at why. That Oliver has not broken Robles’s record has nothing to do with an unfortunate stumble out of the blocks or imperfect technique. It has everything to do with biology. In the sports that best measure athleticism — track and field, mostly — athletic performance has peaked. The studies show the steady progress of athletic achievement through the first half of the 20th century, and into the latter half, and always the world-record times fall. Then, suddenly, achievement flatlines. These days, athletes’ best sprints, best jumps, best throws — many of them happened years ago, sometimes a generation ago.
“We’re reaching our biological limits,” said Geoffroy Berthelot, one of the coauthors of both studies and a research specialist at the Institute for Biomedical Research and Sports Epidemiology in Paris. “We made major performance increases in the last century. And now it is very hard.”
Berthelot speaks with the bemused detachment of a French existentialist. What he predicts for the future of sport is just as indifferent, especially for the people who enjoy it: a great stagnation, reaching every event where singular athleticism is celebrated, for the rest of fans’ lives. And yet reading Berthelot’s work is not grim, not necessarily anyway. It is oddly absorbing. The implicit question that his work poses is larger than track and field, or swimming, or even sport itself. Do we dare to acknowledge our limitations? And what happens once we do?
It’s such a strange thought, antithetical to the more-more-more of American ideals. But it couldn’t be more relevant to Americans today.
In the early 1950s, the scientific community thought Roger Bannister’s attempt to break the four-minute mile might result in his death. Many scholars were certain of the limits of human achievement. If Bannister didn’t die, the thinking went, he might lose a limb. Or if no physiological barrier existed, surely a mathematical one did. The idea of one minute for one lap, and four minutes for four, imposed a beautiful, eerie symmetry — besting it seemed like an ugly distortion, and, hence, an impossibility. But Bannister broke the four-minute mark in 1954, and within three years 30 others had done it. Limitations, it seemed, existed only in the mind.
Except when they don’t. Geoffroy Berthelot began looking at track and field and swimming records in 2007. These were the sports that quantified the otherwise subjective idea of athleticism. There are no teammates in these sports, and improvement is marked scientifically, with a stopwatch or tape measure. In almost every other game, even stat-heavy games, athletic progression can’t be measured, because teammates and opponents temper results. What is achieved on these playing fields, then, doesn’t represent — can’t represent — the totality of achievement: Was Kareem Abdul-Jabbar a better basketball player than Michael Jordan because Abdul-Jabbar scored more career points? Or was Wilt Chamberlain better than them both because he scored 100 in a game? And where does this leave Bill Russell, who won more championships than anybody? By contrast, track and field and swimming are pure, the sporting world’s equivalent of a laboratory.
Berthelot wanted to know more about the progression of athletic feats over time in these sports, how and why performance improved in the modern Olympic era. So he plotted it out, every world record from 1896 onward. When placed on an L-shaped graph, the record times fell consistently, as if down a gently sloped hill. They fell because of improving nutritional standards, strength and conditioning programs, and the perfection of technique. But once Berthelot’s L-shaped graphs reached the 1980s, something strange happened: Those gently sloping hills leveled into plains. In event after event, record times began to hold.
The trend continued through the 1990s, and into the last decade. Today 64 percent of track and field world records have stood since 1993. One world record, the women’s 1,500 meters, hasn’t been broken since 1980. When Berthelot published his study last year in the online journal PLoS One, he made the simple but bold argument that athletic performance had peaked. On the whole, Berthelot said, the pinnacle of athletic achievement was reached around 1988. We’ve been watching a virtual stasis ever since.
Berthelot argues that performance plateaued for the same reasons it improved over all those decades. Or, put another way, because it improved over all those decades. Records used to stand because some athletes were not well nourished. And then ubiquitous nutritional standards developed, and records fell. Records used to stand because athletes had idiosyncratic forms and techniques. And then through an evolution of experimentation — think high jumper Dick Fosbury and his Fosbury Flop — the best practices were codified and perfected, and now a conformity of form rules sport. Records used to stand because only a minority of athletes lifted weights and conditioned properly. Here, at least, the reasoning is a bit more complicated. Now everybody is ripped, yes, but what strength training also introduced was steroid use. Berthelot doesn’t name names, but he wonders how many of today’s records stand because of pharmacological help, the records broken during an era of primitive testing, before the World Anti-Doping Agency was established in 1999. (This assumes, of course, that WADA catches everything these days. And it probably doesn’t.)
Berthelot isn’t the only one arguing athletic limitation. Greg Whyte is a former English Olympic pentathlete, now a renowned trainer in the United Kingdom and academic at the University of Wolverhampton, who, in 2005, coauthored a study published in the journal Medicine and Science in Sports and Exercise. The study found that athletes in track and field’s distance events were nearing their physiological limits. When reached by phone recently and asked about the broader scope of Berthelot’s study, Whyte said, “I think Geoffroy’s right on it.” In fact, Whyte had just visited Berthelot in Paris. The two hope to collaborate in the future.
It’s a convincing case Berthelot presents, but for one unaccounted fact: What to do with Usain Bolt? The Jamaican keeps torching 100- and 200-meter times, is seemingly beyond-human in some of his races and at the very least the apotheosis of progression. How do you solve a problem like Usain Bolt?
“Bolt is a very particular case,” Berthelot said. Only five track and field world records have been broken since 2008. Bolt holds — or contributed to — three of them: the 100 meters, 200 meters, and 4×100-meter relay. “All the media focus on Usain Bolt because he’s the only one who’s progressing today,” Berthelot said. He may also be the last to progress.
Another Berthelot paper, published in 2008, predicts that the end of almost all athletic improvement will occur around 2027. By that year, if current trends hold — and for Berthelot, there’s little doubt that they will — the “human species’ physiological frontiers will be reached,” he writes. To the extent that world records are still vulnerable by then, they will be improved by no more than 0.05 percent — so marginal that the fans, Berthelot reasons, will likely fail to care.
Maybe the same can be said of the athletes. Berthelot notes how our culture asks them — and in fact elite athletes expect of themselves — to always grow bigger, be stronger, go faster. But what happens when that progression stops? Or, put another way: What happens if it stopped 20 years ago? Why go on? The fame is quadrennial. The money’s not great. (Not for nothing: Usain Bolt said recently he’ll go through another Olympic cycle and then switch to pro football.) The pressure is excruciating, said Dr. Alan Goldberg, a sports psychologist who has worked with Olympic athletes, especially if they’re competing at a level where breaking a record is a possibility.
In a different sport but the same context, another individual performer, Ted Williams, looked back on his career and said, in the book “My Turn at Bat,” “I’m glad it’s over…I wouldn’t go back to being eighteen or nineteen years old knowing what was in store, the sourness and bitterness, knowing how I thought the weight of the damn world was always on my neck, grinding on me. I wouldn’t go back to that for anything.” Remember, this is from a man who succeeded, who, most important, broke records. What happens to the athlete who knows there are no records left to break? What happens when you acknowledge your limitations?
The short answer is, you create what you did not previously have. Swimming records, for instance, followed the same trend as track and field: a stasis beginning roughly in the mid-1980s. But in 2000, the sport innovated its way out of its torpor. The famous full-body LZR suits hit the scene, developed with NASA technologies and polyurethane, promising to reduce swimmers’ drag in the water. World records fell so quickly and so often that they became banal, the aquatic version of Barry Bonds hitting a home run. Since 2000, all but four of swimming’s records have been broken, many of them multiple times.
But in 2009 swimming’s governing body, FINA, banned the full-body LZR suits. FINA did not, however, ban the knee-to-navel suits men had previously worn, or the shoulder-to-knee suits women preferred. These suits were made of woven textiles. In other words, FINA acknowledged the need for technological enhancements, even as it banned the LZR suits. As a result, world records still fall. Last month in Dubai, American Ryan Lochte set one in the 400-meter individual medley.
These ancient sports are a lot like the world’s current leading economies: stagnant, and looking for a way to break through. The best in both worlds do so by innovating, improving the available resources, and when that process exhausts itself, creating new ones. However, this process — whether through an increasing reliance on computers, or NASA-designed swimsuits, or steroids that regulators can’t detect — changes the work we once loved, or the sports we once played, or the athletes we once cheered.
It may not always be for the worse, but one thing is certain. When we address our human limits these days, we actually become less human.
Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.
Scientists are finally succeeding where so many men have failed: in understanding why women find some guys handsome and others hideous
WHEN it comes to partners, men often find women’s taste fickle and unfathomable. But ladies may not be entirely to blame. A growing body of research suggests that their preference for certain types of male physiognomy may be swayed by things beyond their conscious control—like prevalence of disease or crime—and in predictable ways.
Masculine features—a big jaw, say, or a prominent brow—tend to reflect physical and behavioural traits, such as strength and aggression. They are also closely linked to physiological ones, like virility and a sturdy immune system.
The obverse of these desirable characteristics looks less appealing. Aggression is fine when directed at external threats, less so when it spills over onto the hearth. Sexual prowess ensures plenty of progeny, but it often goes hand in hand with promiscuity and a tendency to shirk parental duties or leave the mother altogether.
So, whenever a woman has to choose a mate, she must decide whether to place a premium on the hunk’s choicer genes or the wimp’s love and care. Lisa DeBruine, of the University of Aberdeen, believes that today’s women still face this dilemma and that their choices are affected by unconscious factors.
In a paper published earlier this year Dr DeBruine found that women in countries with poor health statistics preferred men with masculine features more than those who lived in healthier societies. Where disease is rife, this seemed to imply, giving birth to healthy offspring trumps having a man stick around long enough to help care for it. In more salubrious climes, therefore, wimps are in with a chance.
Now, though, researchers led by Robert Brooks, of the University of New South Wales, have taken another look at Dr DeBruine’s data and arrived at a different conclusion. They present their findings in the Proceedings of the Royal Society. Dr Brooks suggests that it is not health-related factors, but rather competition and violence among men that best explain a woman’s penchant for manliness. The more rough-and-tumble the environment, the researchers’ argument goes, the more women prefer masculine men, because they are better than the softer types at providing for mothers and their offspring.
Since violent competition for resources is more pronounced in unequal societies, Dr Brooks predicted that women would value masculinity more highly in countries with a higher Gini coefficient, which is a measure of income inequality. And indeed, he found that this was better than a country’s health statistics at predicting the relative attractiveness of hunky faces.
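The Gini coefficient mentioned above can be made concrete with a short sketch. This is an illustration only, not code from the study; the income lists are invented, and the sorted-array formula is one standard way of computing the statistic.

```python
def gini(incomes):
    """Gini coefficient: 0 means perfect equality, values near 1 mean
    one person holds nearly all the income."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula for sorted data:
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))             # 0.0  -- everyone earns the same
print(round(gini([0, 0, 0, 10]), 2))  # 0.75 -- one person has everything
```

On this scale, the unequal societies in Dr Brooks’s analysis would sit toward the higher end, the more egalitarian ones toward zero.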
The rub is that unequal countries also tend to be less healthy. So, in order to disentangle cause from effect, Dr Brooks compared Dr DeBruine’s health index with a measure of violence in a country: its murder rate. Again, he found that his chosen indicator predicts preference for facial masculinity more accurately than the health figures do (though less well than the Gini).
However, in a rejoinder published in the same issue of the Proceedings, Dr DeBruine and her colleagues point to a flaw in Dr Brooks’s analysis: his failure to take into account a society’s overall wealth. When she performed the statistical tests again, this time controlling for GNP, it turned out that the murder rate’s predictive power disappears, whereas that of the health indicators persists. In other words, the prevalence of violent crime seems to predict mating preferences only in so far as it reflects a country’s relative penury.
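The statistical move in Dr DeBruine’s rejoinder, in which a predictor’s apparent effect vanishes once a confounder is controlled for, can be sketched with synthetic data. None of this is her data; the variable names merely echo the argument: "murder" correlates with "prefs" only because both track "wealth".

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
wealth = rng.normal(size=n)                 # stands in for national wealth (GNP)
murder = wealth + rng.normal(size=n)        # murder rate tracks wealth, plus noise
prefs = wealth + 0.1 * rng.normal(size=n)   # preference driven by wealth alone

# Regress preference on the murder rate by itself: it looks predictive.
b_alone = np.linalg.lstsq(np.c_[np.ones(n), murder], prefs, rcond=None)[0][1]

# Add the confounder as a second regressor: murder's coefficient collapses.
b_ctrl = np.linalg.lstsq(np.c_[np.ones(n), murder, wealth], prefs, rcond=None)[0][1]

print(round(b_alone, 2), round(b_ctrl, 2))  # large slope alone, near zero controlled
```

This is the shape of the dispute: a variable can predict an outcome in isolation yet carry no independent information once the true driver is in the model.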
The statistical tussle shows the difficulty of drawing firm conclusions from correlations alone. Dr DeBruine and Dr Brooks admit as much, and agree the dispute will not be settled until the factors that shape mating preferences are tested directly.
Another recent study by Dr DeBruine and others has tried to do just that. Its results lend further credence to the health hypothesis. This time, the researchers asked 124 women and 117 men to rate 15 pairs of male faces and 15 pairs of female ones for attractiveness. Each pair of images depicted the same set of features tweaked to make one appear ever so slightly manlier than the other (if the face was male) or more feminine (if it was female). Some were also made almost imperceptibly lopsided. Symmetry, too, indicates a mate’s quality because in harsh environments robust genes are needed to ensure even bodily development.
Next, the participants were shown another set of images, depicting objects that elicit varying degrees of disgust, such as a white cloth either stained with what looked like a bodily fluid, or a less revolting blue dye. Disgust is widely assumed to be another adaptation, one that warns humans to stay well away from places where germs and other pathogens may be lurking. So, according to Dr DeBruine’s hypothesis, people shown the more disgusting pictures ought to respond with an increased preference for masculine lads and feminine lasses, and for the more symmetrical countenances.
That is precisely what happened when they were asked to rate the same set of faces one more time. But it only worked with the opposite sex; the revolting images failed to alter what either men or women found attractive about their own sex. This means sexual selection, not other evolutionary mechanisms, is probably at work.
More research is needed to confirm these observations and to see whether other factors, like witnessing violence, bear on human physiognomic proclivities. For now, though, the majority of males who do not resemble Brad Pitt may at least take comfort that this matters less if their surroundings remain spotless.
Are they religious fanatics? Deluded ideologues? New research suggests something more mundane: They just want to commit suicide.
Qari Sami did something strange the day he killed himself. The university student from Kabul had long since grown a bushy, Taliban-style beard and favored the baggy tunics and trousers of the terrorists he idolized. He had even talked of waging jihad. But on the day in 2005 that he strapped the bomb to his chest and walked into the crowded Kabul Internet cafe, Sami kept walking — between the rows of tables, beyond the crowd, along the back wall, until he was in the bathroom, with the door closed.
And that is where, alone, he set off his bomb.
The blast killed a customer and a United Nations worker, and injured five more. But the carnage could have been far worse. Brian Williams, an associate professor of Islamic studies at the University of Massachusetts Dartmouth, was in Afghanistan at the time. One day after the attack, he stood before the cafe’s hollowed-out wreckage and wondered why any suicide bomber would do what Sami had done: deliberately walk away from the target before setting off the explosives. “[Sami] was the one that got me thinking about the state of mind of these guys,” Williams said.
Eventually a fuller portrait emerged. Sami was a young man who kept to himself, a brooder. He was upset by the US forces’ ouster of the Taliban in the months following 9/11 — but mostly Sami was just upset. He took antidepressants daily. One of Sami’s few friends told the media he was “depressed.”
Today Williams thinks that Sami never really cared for martyrdom; more likely, he was suicidal. “That’s why he went to the bathroom,” Williams said.
The traditional view of suicide bombers is well established, and backed by the scholars who study them. The bombers are, in the post-9/11 age, often young, ideologically driven men and women who hate the laissez-faire norms of the West — or at least the occupations and wars of the United States — because they contradict the fundamentalist interpretations that animate the bombers’ worldview. Their deaths are a statement, then, as much as they are the final act of one’s faith; and as a statement they have been quite effective. They propagate future deaths, as terrorist organizers use a bomber’s martyrdom as propaganda for still more suicide terrorism.
But Williams is among a small cadre of scholars from across the world pushing the rather contentious idea that some suicide bombers may in fact be suicidal. At the forefront is the University of Alabama’s Adam Lankford, who recently published an analysis of suicide terrorism in the journal Aggression and Violent Behavior. Lankford cites Israeli scholars who interviewed would-be Palestinian suicide bombers. These scholars found that 40 percent of the terrorists showed suicidal tendencies; 13 percent had made previous suicide attempts, unrelated to terrorism. Lankford finds Palestinian and Chechen terrorists who are financially insolvent, recently divorced, or in debilitating health in the months prior to their attacks. A 9/11 hijacker, in his final note to his wife, describing how ashamed he is to have never lived up to her expectations. Terrorist recruiters admitting they look for the “sad guys” for martyrdom.
For Lankford and like-minded thinkers, changing the perception of the suicide bomber changes the focus of any mission that roots out terrorism. If the suicide bomber can be viewed as something more than a brainwashed, religiously fervent automaton, anticipating a paradise of virgins in the clouds, then that suicide bomber can be seen as a nuanced person, encouraging a greater curiosity about the terrorist, Lankford thinks. The more the terrorist is understood, the less damage the terrorist can cause.
“Changing perceptions can save lives,” Lankford said.
Islam forbids suicide. Of the world’s three Abrahamic faiths, “The Koran has the only scriptural prohibition against it,” said Robert Pape, a professor at the University of Chicago who specializes in the causes of suicide terrorism. The phrase suicide bomber itself is a Western conception, and a pretty foul one at that: an egregious misnomer in the eyes of Muslims, especially from the Middle East. For the Koran distinguishes between suicide and, as the book says, “the type of man who gives his life to earn the pleasure of Allah.” The latter is a courageous Fedayeen — a martyr. Suicide is a problem, but martyrdom is not.
For roughly 1,400 years, since the time of the Prophet Muhammad, scholars have accepted not only the ubiquity of martyrdom in the Muslim world but the strict adherence to its principles by those who participate in it: A lot of people have died, and keep dying, for a cause. Only recently, and sometimes only reluctantly, has the why of martyrdom been challenged.
Ariel Merari is a retired professor of psychology at Tel Aviv University. After the Beirut barracks bombing in 1983 — in which a terrorist, Ismalal Ascari, drove a truck bomb into a United States Marine barracks, killing 241 American servicemen — Merari began investigating the motives of Ascari, and the terrorist group with which the attack was aligned, Hezbollah. Though the bombing came during the Lebanese Civil War, Merari wondered whether it was less a battle within the conflict than a means chosen by one man, Ascari, to end his life. By 1990, Merari had published a paper asking the rest of academia to consider if suicide bombers were actually suicidal. “But this was pretty much speculative, this paper,” Merari said.
In 2002, he approached a group of 15 would-be suicide bombers — Palestinians arrested and detained moments before their attacks — and asked if he could interview them. Remarkably, they agreed. “Nobody” — no scholar — “had ever been able to do something like this,” Merari said. He also approached 14 detained terrorist organizers. Some of the organizers had university degrees and were intrigued by the fact that Merari wanted to understand them. They, too, agreed to be interviewed. Merari was ecstatic.
Fifty-three percent of the would-be bombers showed “depressive tendencies” — melancholy, low energy, tearfulness, the study found — whereas 21 percent of the organizers exhibited the same. Furthermore, 40 percent of the would-be suicide bombers expressed suicidal tendencies; one talked openly of slitting his wrists after his father died. But the study found that none of the terrorist organizers were suicidal.
The paper was published last year in the journal Terrorism and Political Violence. Adam Lankford read it in his office at the University of Alabama. The results confirmed what he’d been thinking. The criminal justice professor had published a book, “Human Killing Machines,” about the indoctrination of ordinary people as agents for terrorism or genocide. Merari’s paper touched on themes he’d explored in his book, but the paper also gave weight to the airy speculation Lankford had heard a few years earlier in Washington, D.C., while he was earning his PhD from American University. There, Lankford had helped coordinate antiterrorism forums with the State Department for high-ranking military and security personnel. And it was at these forums, from Third World-country delegates, that Lankford first began to hear accounts of suicide bombers who may have had more than martyrdom on their minds. “That’s what sparked my interest,” he said.
He began an analysis of the burgeoning, post-9/11 literature on suicide terrorism, poring over the studies that inform the thinking on the topic. Lankford’s paper was published this July. In it, he found stories similar to Merari’s: bombers who unwittingly revealed suicidal tendencies in, say, their martyrdom videos, recorded moments before the attack; and organizers who valued their lives too much to end it, so they recruited others, often from the poorest, bleakest villages.
But despite the accounts from their own published papers, scholar after scholar had dismissed the idea of suicidality among bombers. Lankford remains incredulous. “This close-mindedness has become a major barrier to scholarly progress,” Lankford said.
Not everyone is swayed by his argument. Mia Bloom is a fellow at the International Center for the Study of Terrorism at Penn State University and the author of the book, “Dying to Kill: The Allure of Suicide Terror.” “I would be hesitant to agree with Mr. Lankford,” she said. “You don’t want to conflate the Western ideas of suicide with something that is, in the Middle East, a religious ceremony.” For her, “being a little bit wistful” during a martyrdom video is not an otherwise hidden window into a bomber’s mind. Besides, most suicide bombers “are almost euphoric” in their videos, she said. “Because they know that before the first drop of blood hits the ground, they’re going to be with Allah.” (Lankford counters that euphoria, moments before one’s death, can also be a symptom of the suicidal person.)
One study in the academic literature directly refutes Lankford’s claim, and that’s the University of Nottingham’s Ellen Townsend’s “Suicide Terrorists: Are They Suicidal?” published in the journal Suicide and Life Threatening Behavior in 2007. (The answer is a resounding “no.”)
Townsend’s paper was an analysis of empirical research on suicide terrorism — the scholars who’d talked with the people who knew the attackers. In Lankford’s own paper a few years after Townsend’s, he attacked her methodology: relying as she did on the accounts of a martyr’s family members and friends, who, Lankford wrote, “may lie to protect the ‘heroic’ reputations of their loved ones.”
When reached by phone, Townsend had a wry chuckle for Lankford’s “strident” criticism of her work. Yes, in the hierarchy of empirical research, the sort of interviews on which her paper is based have weaknesses: A scholar can’t observe everything, can’t control for all biases. “But that’s still stronger evidence than the anecdotes in Lankford’s paper,” Townsend said.
Robert Pape, at the University of Chicago, agrees. “The reason Merari’s view” — and by extension, Lankford’s — “is so widely discredited is that we have a handful of incidents of what looks like suicide and we have over 2,500 suicide attackers. We have literally hundreds and hundreds of stories where religion is a factor — and revenge, too….To put his idea forward, [Lankford] would need to have a 100 or more stories or anecdotes to even get in the game.”
He’s working on that. Lankford’s forthcoming study, to be published early next year, is “far more robust” than his first: a list of more than 75 suicide terrorists and why they were likely suicidal. He cites a Palestinian woman who, five months after lighting herself on fire in her parents’ kitchen, attempted a return to the hospital that saved her life. But this time she approached with a pack of bombs wrapped around her body, working as an “ideologue” in the service of the al-Aqsa Martyrs Brigade.
Lankford writes of al Qaeda-backed terrorists in Iraq who would target and rape local women, and then see to it that the victims were sent to Samira Ahmed Jassim. Jassim would convince these traumatized women that the only way to escape public scorn was martyrdom. She was so successful she became known as the Mother of Believers. “If you just needed true believers, you wouldn’t need them to be raped first,” Lankford said in an interview.
Lankford is also intrigued by the man who in some sense launched the current study of suicide terrorism: Mohammed Atta, the ringleader behind the 9/11 hijacking. “It’s overwhelming, his traits of suicidality,” Lankford said. An isolated, neglected childhood, pathologically ashamed of any sexual expression. “According to the National Institute of Mental Health there are 11 signs, 11 traits and symptoms for a man being depressed,” Lankford said. “Atta exhibited eight of them.”
If Atta were seen as something more than a martyr, or rather something other than one, the next Atta would not have the same effect on the world. That’s Lankford’s hope anyway. But transporting a line of thought from the halls of academia to the chambers of Congress or onto field agents’ dossiers is no easy task. Lankford said he has not heard from anyone in the government regarding his work. And even if the idea does reach a broader audience in the West, there is still the problem of convincing those in the Middle East of its import. Pape, at the University of Chicago, said people in the Muslim world commit suicide at half the rate they do in the Jewish or Christian world. The act is scorned, which makes it all the more difficult to accept any behaviors or recurring thoughts that might lead to it.
Still, there is reason for Lankford to remain hopeful. The Israeli government, for one, has worked closely with Merari on his research into suicidal tendencies among Palestinian terrorists. Then there is Iraq. Iraq is on the verge of autonomy for many reasons, but one of them is the United States’ decision to work with Iraqis instead of against them — and, more fundamentally, to understand them. Lankford thinks that if the same inquisitiveness were applied to suicide bombers and their motives, “the violence should decrease.”
Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.
In my two years working in the president’s office at Harvard, before I was laid off in spring, I gave myself the job of steward of her books. Gift books would arrive in the mail, or from campus visitors, or from her hosts when she traveled; books by Harvard professors were kept on display in reception or in storage at our Massachusetts Hall office; books flowed in from publishers, or authors seeking blurbs, or self-published authors of no reputation or achievement, who sometimes sent no more than loosely bound manuscripts.
I took charge of the president’s books because it was my assigned job to write thank-you letters for them. I would send her the books and the unsigned draft replies on presidential letterhead; for each one, she sent me back the signed letter and, most of the time, the book, meaning she had no further use for it. Some books she would keep, but seldom for very long, which meant those came back to me too, in one of the smaller offices on the third floor of Mass Hall where there was no room to put them. Furthermore they weren’t so easily disposed of. Often they bore inscriptions, to president Drew Faust or to her and her husband from people they knew; and even if the volume was something rather less exalted — a professor from India sending his management tome or a book of Hindi poems addressed, mysteriously, to “Sir” or to the “vice-chancellor of Harvard University” — these books obviously couldn’t end up in a secondhand bookshop or charity bin or anywhere they could cause embarrassment. All were soon moved to an overflow space at the very end of the hall, coincidentally looking out at a donation bin for books at a church across the street.
One might feel depressed sitting amid so many unwanted books — so much unread knowledge and overlooked experience — but tending president Faust’s books became my favorite part of the job. No one noticed or interfered in what I did, which in a president’s office like Harvard, where everything is scrutinized, is uncommon. Even a thank-you note can say too much. I developed my own phrase for these notes — “I look forward to spending some time with it” — as a substitute for saying “I look forward to reading it,” because the president can’t possibly read all the books she receives, and there was always the chance she would run into the author somewhere, who might ask if she’d read his book yet.
Any Harvard president attracts books from supplicants, and this particular president attracted her own subcategory. Many books came from publishers or authors not at all shy about requesting a presidential blurb. These were easy to decline, and became easy to decline even when they came from the president’s friends, colleagues, acquaintances, neighbors, and others met over a distinguished career as a Civil War historian. This was the subcategory: Thanks to her specialty, we were building up a large collection of Civil War books, galleys and unpublished manuscripts — not just professional monographs, but amateurish family or local histories. These soon filled the overflow space in Massachusetts Hall, where water leaking from the roof during the unusual March rainstorms resulted in our having to discard several.
For everyone who sent us a book, the signed note back from the president mattered more than the book itself; both sides presumably understood that the president could buy or obtain any book she actually needed. The replies were signed by her — no auto-pen — which meant that even if she didn’t quite read your book the president still held it in her hands even for a moment, perhaps scribbling something at the bottom of her note with a fine black pen, or crossing out the “Ms” or “Professor” heading and substituting the author’s first name.
I had all kinds of plans for these books. The inscribed books we had to keep, of course, no matter how dire or dreadful. (The archives would want its pick of them anyway, deciding which books would become keepsakes of this particular era at Harvard.) But the many good titles that remained could go to struggling small foreign universities or schools, to our soldiers and Marines overseas, or to local libraries as an act of goodwill from a powerful and oft-maligned neighbor. They could go to the Allston branch of the Boston Public Library, for instance, perhaps to be dubbed “the president’s collection,” with its own shelving but freely available to Allstonians to read or borrow.
None of these ideas came to fruition. All of them would have required me to rise to a realm where I was no longer in charge — indeed, where I didn’t have a foothold. I would have to call meetings, bring bigger players to the table. Harvard’s top bureaucracy is actually quite small, and most of it was, literally, in my immediate presence: Two doors to the left was one vice president; two doors to the right, around a tight corner, was another. But these were big-gesture folks alongside a resolutely small-gesture one (me), and without an intermediary to help build support for my ideas, my books weren’t going anywhere except, once, into a cardboard box outside my office just before Christmas, where I encouraged staff to help themselves; perhaps two dozen books, or half what I started the box with, went out that way.
In all this, the important thing was that books were objects to be honored, not treated as tiresome throwaways, and that everyone in the building knew this. Books are how, traditionally, universities are built: John Harvard was not the founder of Harvard University but a clergyman who, two years after its founding, bequeathed it his library. I used to joke that the most boring book in our collection was the volume called the “Prince Takamado Trophy All Japan Inter-Middle School English Oratorical Contest,” but if I hear it isn’t still on a shelf somewhere in Mass Hall 20 years from now, I won’t be the only one who’s disappointed.
Eric Weinberger has reviewed books in the Globe since 2000, and taught writing at Harvard for 10 years.
After 500 years of Western predominance, Niall Ferguson argues, the world is tilting back to the East.
“We are the masters now.” I wonder if President Barack Obama saw those words in the thought bubble over the head of his Chinese counterpart, Hu Jintao, at the G20 summit in Seoul last week. If the president was hoping for change he could believe in—in China’s currency policy, that is—all he got was small change. Maybe Treasury Secretary Timothy Geithner also heard “We are the masters now” as the Chinese shot down his proposal for capping imbalances in global current accounts. Federal Reserve Chairman Ben Bernanke got the same treatment when he announced a new round of “quantitative easing” to try to jump-start the U.S. economy, a move described by one leading Chinese commentator as “uncontrolled” and “irresponsible.”
“We are the masters now.” That was certainly the refrain that I kept hearing in my head when I was in China two weeks ago. It wasn’t so much the glitzy, Olympic-quality party I attended in the Tai Miao Temple, next to the Forbidden City, that made this impression. The displays of bell ringing, martial arts and all-girl drumming are the kind of thing that Western visitors expect. It was the understated but unmistakable self-confidence of the economists I met that told me something had changed in relations between China and the West.
One of them, Cheng Siwei, explained over dinner China’s plan to become a leader in green energy technology. Between swigs of rice wine, Xia Bin, an adviser to the People’s Bank of China, outlined the need for a thorough privatization program, “including even the Great Hall of the People.” And in faultless English, David Li of Tsinghua University confessed his dissatisfaction with the quality of Chinese Ph.D.s.
You could not ask for smarter people with whom to discuss the two most interesting questions in economic history today: Why did the West come to dominate not only China but the rest of the world in the five centuries after the Forbidden City was built? And is that period of Western dominance now finally coming to an end?
In a brilliant paper that has yet to be published in English, Mr. Li and his co-author Guan Hanhui demolish the fashionable view that China was economically neck-and-neck with the West until as recently as 1800. Per capita gross domestic product, they show, stagnated in the Ming era (1402-1626) and was significantly lower than that of pre-industrial Britain. China still had an overwhelmingly agricultural economy, with low-productivity cultivation accounting for 90% of GDP. And for a century after 1520, the Chinese national savings rate was actually negative. There was no capital accumulation in late Ming China; rather the opposite.
The story of what Kenneth Pomeranz, a history professor at the University of California, Irvine, has called “the Great Divergence” between East and West began much earlier. Even the late economist Angus Maddison may have been over-optimistic when he argued that in 1700 the average inhabitant of China was probably slightly better off than the average inhabitant of the future United States. Mr. Maddison was closer to the mark when he estimated that, in 1600, per capita GDP in Britain was already 60% higher than in China.
For the next several hundred years, China continued to stagnate and, in the 20th century, even to retreat, while the English-speaking world, closely followed by northwestern Europe, surged ahead. By 1820 U.S. per capita GDP was twice that of China; by 1870 it was nearly five times greater; by 1913 the ratio was nearly 10 to one.
Despite the painful interruption of the Great Depression, the U.S. suffered nothing so devastating as China’s wretched mid-20th century ordeal of revolution, civil war, Japanese invasion, more revolution, man-made famine and yet more (“cultural”) revolution. In 1968 the average American was 33 times richer than the average Chinese, using figures calculated on the basis of purchasing power parity (allowing for the different costs of living in the two countries). Calculated in current dollar terms, the differential at its peak was more like 70 to 1.
This was the ultimate global imbalance, the result of centuries of economic and political divergence. How did it come about? And is it over?
As I’ve researched my forthcoming book over the past two years, I’ve concluded that the West developed six “killer applications” that “the Rest” lacked. These were:
• Competition: Europe was politically fragmented, and within each monarchy or republic there were multiple competing corporate entities.
• The Scientific Revolution: All the major 17th-century breakthroughs in mathematics, astronomy, physics, chemistry and biology happened in Western Europe.
• The rule of law and representative government: This optimal system of social and political order emerged in the English-speaking world, based on property rights and the representation of property owners in elected legislatures.
• Modern medicine: All the major 19th- and 20th-century advances in health care, including the control of tropical diseases, were made by Western Europeans and North Americans.
• The consumer society: The Industrial Revolution took place where there was both a supply of productivity-enhancing technologies and a demand for more, better and cheaper goods, beginning with cotton garments.
• The work ethic: Westerners were the first people in the world to combine more extensive and intensive labor with higher savings rates, permitting sustained capital accumulation.
Those six killer apps were the key to Western ascendancy. The story of our time, which can be traced back to the reign of the Meiji Emperor in Japan (1867-1912), is that the Rest finally began to download them. It was far from a smooth process. The Japanese had no idea which elements of Western culture were the crucial ones, so they ended up copying everything, from Western clothes and hairstyles to the practice of colonizing foreign peoples. Unfortunately, they took up empire-building at precisely the moment when the costs of imperialism began to exceed the benefits. Other Asian powers—notably India—wasted decades on the erroneous premise that the socialist institutions pioneered in the Soviet Union were superior to the market-based institutions of the West.
Beginning in the 1950s, however, a growing band of East Asian countries followed Japan in mimicking the West’s industrial model, beginning with textiles and steel and moving up the value chain from there. The downloading of Western applications was now more selective. Competition and representative government did not figure much in Asian development, which instead focused on science, medicine, the consumer society and the work ethic (less Protestant than Max Weber had thought). Today Singapore is ranked third in the World Economic Forum’s assessment of competitiveness. Hong Kong is 11th, followed by Taiwan (13th), South Korea (22nd) and China (27th). This is roughly the order, historically, in which these countries Westernized their economies.
Today per capita GDP in China is 19% that of the U.S., compared with 4% when economic reform began just over 30 years ago. Hong Kong, Japan and Singapore were already there as early as 1950; Taiwan got there in 1970, and South Korea got there in 1975. According to the Conference Board, Singapore’s per capita GDP is now 21% higher than that of the U.S., Hong Kong’s is about the same, Japan’s and Taiwan’s are about 25% lower, and South Korea’s 36% lower. Only a foolhardy man would bet against China’s following the same trajectory in the decades ahead.
China’s has been the biggest and fastest of all the industrialization revolutions. In the space of 26 years, China’s GDP grew by a factor of 10. It took the U.K. 70 years after 1830 to grow by a factor of four. According to the International Monetary Fund, China’s share of global GDP (measured in current prices) will pass the 10% mark in 2013. Goldman Sachs continues to forecast that China will overtake the U.S. in terms of GDP in 2027, just as it recently overtook Japan.
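The growth comparison above can be sanity-checked as compound annual rates. This short Python sketch (my own arithmetic, not from the article) shows what a tenfold rise in 26 years and a fourfold rise in 70 years imply per year:

```python
# Back-of-the-envelope check of the growth comparison above:
# China, tenfold in 26 years; the U.K., fourfold in 70 years,
# each expressed as a compound annual growth rate.
china_rate = 10 ** (1 / 26) - 1   # roughly 9% a year
uk_rate = 4 ** (1 / 70) - 1       # roughly 2% a year
print(f"China: {china_rate:.1%} per year, U.K.: {uk_rate:.1%} per year")
```

The implied rates (about 9% a year versus about 2% a year) are what make the "biggest and fastest" claim concrete.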
But in some ways the Asian century has already arrived. China is on the brink of surpassing the American share of global manufacturing, having overtaken Germany and Japan in the past 10 years. China’s biggest city, Shanghai, already sits atop the ranks of the world’s megacities, with Mumbai right behind; no American city comes close.
Nothing is more certain to accelerate the shift of global economic power from West to East than the looming U.S. fiscal crisis. With a debt-to-revenue ratio of 312%, Greece is in dire straits already. But the debt-to-revenue ratio of the U.S. is 358%, according to Morgan Stanley. The Congressional Budget Office estimates that interest payments on the federal debt will rise from 9% of federal tax revenues to 20% in 2020, 36% in 2030 and 58% in 2040. Only America’s “exorbitant privilege” of being able to print the world’s premier reserve currency gives it breathing space. Yet this very privilege is under mounting attack from the Chinese government.
For many commentators, the resumption of quantitative easing by the Federal Reserve has appeared to spark a currency war between the U.S. and China. If the “Chinese don’t take actions” to end the manipulation of their currency, President Obama declared in New York in September, “we have other means of protecting U.S. interests.” The Chinese premier Wen Jiabao was quick to respond: “Do not work to pressure us on the renminbi rate…. Many of our exporting companies would have to close down, migrant workers would have to return to their villages. If China saw social and economic turbulence, then it would be a disaster for the world.”
Such exchanges are a form of pi ying xi, China’s traditional shadow puppet theater. In reality, today’s currency war is between “Chimerica”—as I’ve called the united economies of China and America—and the rest of the world. If the U.S. prints money while China effectively still pegs its currency to the dollar, both parties benefit. The losers are countries like Indonesia and Brazil, whose real trade-weighted exchange rates have appreciated since January 2008 by 18% and 17%, respectively.
But who now gains more from this partnership? With China’s output currently 20% above its pre-crisis level and that of the U.S. still 2% below, the answer seems clear. American policy-makers may utter the mantra that “they need us as much as we need them” and refer ominously to Lawrence Summers’s famous phrase about “mutually assured financial destruction.” But the Chinese already have a plan to reduce their dependence on dollar reserve accumulation and subsidized exports. It is a strategy not so much for world domination on the model of Western imperialism as for reestablishing China as the Middle Kingdom—the dominant tributary state in the Asia-Pacific region.
If I had to summarize China’s new grand strategy, I would do it, Chinese-style, as the Four “Mores”: Consume more, import more, invest abroad more and innovate more. In each case, a change of economic strategy pays a handsome geopolitical dividend.
By consuming more, China can reduce its trade surplus and, in the process, endear itself to its major trading partners, especially the other emerging markets. China recently overtook the U.S. as the world’s biggest automobile market (14 million sales a year, compared to 11 million), and its demand is projected to rise tenfold in the years ahead.
By 2035, according to the International Energy Agency, China will be using a fifth of all global energy, a 75% increase since 2008. It accounted for about 46% of global coal consumption in 2009, the World Coal Institute estimates, and consumes a similar share of the world’s aluminum, copper, nickel and zinc production. Last year China used twice as much crude steel as the European Union, United States and Japan combined.
Such figures translate into major gains for the exporters of these and other commodities. China is already Australia’s biggest export market, accounting for 22% of Australian exports in 2009. It buys 12% of Brazil’s exports and 10% of South Africa’s. It has also become a big purchaser of high-end manufactured goods from Japan and Germany. Once China was mainly an exporter of low-price manufactures. Now that it accounts for fully a fifth of global growth, it has become the most dynamic new market for other people’s stuff. And that wins friends.
The Chinese are justifiably nervous, however, about the vagaries of world commodity prices. How could they feel otherwise after the huge price swings of the past few years? So it makes sense for them to invest abroad more. In January 2010 alone, the Chinese made direct investments worth a total of $2.4 billion in 420 overseas enterprises in 75 countries and regions. The overwhelming majority of these were in Asia and Africa. The biggest sectors were mining, transportation and petrochemicals. Across Africa, the Chinese mode of operation is now well established. Typical deals exchange highway and other infrastructure investments for long leases of mines or agricultural land, with no questions asked about human rights abuses or political corruption.
Growing overseas investment in natural resources not only makes sense as a diversification strategy to reduce China’s exposure to the risk of dollar depreciation. It also allows China to increase its financial power, not least through its vast and influential sovereign wealth fund. And it justifies ambitious plans for naval expansion. In the words of Rear Admiral Zhang Huachen, deputy commander of the East Sea Fleet: “With the expansion of the country’s economic interests, the navy wants to better protect the country’s transportation routes and the safety of our major sea-lanes.” The South China Sea has already been declared a “core national interest,” and deep-water ports are projected in Pakistan, Burma and Sri Lanka.
Finally, and contrary to the view that China is condemned to remain an assembly line for products “designed in California,” the country is innovating more, aiming to become, for example, the world’s leading manufacturer of wind turbines and photovoltaic panels. In 2007 China overtook Germany in terms of new patent applications. This is part of a wider story of Eastern ascendancy. In 2008, for the first time, the number of patent applications from China, India, Japan and South Korea exceeded those from the West.
The dilemma posed to the “departing” power by the “arriving” power is always agonizing. The cost of resisting Germany’s rise was heavy indeed for Britain; it was much easier to slide quietly into the role of junior partner to the U.S. Should America seek to contain China or to accommodate it? Opinion polls suggest that ordinary Americans are no more certain how to respond than the president. In a recent survey by the Pew Research Center, 49% of respondents said they did not expect China to “overtake the U.S. as the world’s main superpower,” but 46% took the opposite view.
Coming to terms with a new global order was hard enough after the collapse of the Soviet Union, which went to the heads of many Western commentators. (Who now remembers talk of American hyperpuissance without a wince?) But the Cold War lasted little more than four decades, and the Soviet Union never came close to overtaking the U.S. economically. What we are living through now is the end of 500 years of Western predominance. This time the Eastern challenger is for real, both economically and geopolitically.
The gentlemen in Beijing may not be the masters just yet. But one thing is certain: They are no longer the apprentices.
Niall Ferguson is a professor of history at Harvard University and a professor of business administration at the Harvard Business School. His next book, “Civilization: The West and the Rest,” will be published in March.

In reading the nearly 700 reader responses to my Oct. 17 essay for The Stone, (“Morals Without God?“) I notice how many readers are relieved to see that there are shades of gray when it comes to the question of whether morality requires God. I believe that such a discussion needs to revolve around both the distant past, in which religion likely played little or no role if we go back far enough, and modern times, in which it is hard to disentangle morality and religion. The latter point seemed obvious to me, yet proved controversial. Even though 90 percent of my text questions the religious origins of human morality, and wonders if we need a God to be good, it is the other 10 percent — in which I tentatively assign a role to religion — that drew most ire. Atheists, it seems (at least those who responded here), accept nothing less than 100 percent agreement with their position.
To have a productive debate, religion needs to recognize the power of the scientific method and the truths it has revealed, but its opponents need to recognize that one cannot simply dismiss a social phenomenon found in every major society. If humans are inherently religious, or at least show rituals related to the supernatural, there is a big question to be answered. The issue is not whether or not God exists — which I find to be a monumentally uninteresting question defined, as it is, by the narrow parameters of monotheism — but why humans universally feel the need for supernatural entities. Is this just to stay socially connected or does it also underpin morality? And if so, what will happen to morality in its absence?
Just raising such an obvious issue has become controversial in an atmosphere in which public forums seem to consist of pro-science partisans or pro-religion partisans, and nothing in between. How did we arrive at this level of polarization, this small-mindedness, as if we were taking part in the Oxford Debating Society, where all that matters is winning or losing? It is unfortunate when, in discussing how to lead our lives and why to be good — very personal questions — we end up with a shouting match. There are in fact no answers to these questions, only approximations, and while science may be an excellent source of information, it is simply not designed to offer any inspiration in this regard. It used to be that science and religion went together, and in fact (as I tried to illustrate with Bosch’s paintings) Western science ripened in the bosom of Christianity and its explicit desire for truth. Ironically, even atheism may be looked at as a product of this desire, as explained by the philosopher John Gray:
Christianity struck at the root of pagan tolerance of illusion. In claiming that there is only one true faith, it gave truth a supreme value it had not had before. It also made disbelief in the divine possible for the first time. The long-delayed consequence of the Christian faith was an idolatry of truth that found its most complete expression in atheism. (Straw Dogs, 2002).
Those who wish to remove religion and define morality as the pursuit of scientifically defined well-being (à la Sam Harris) should read up on earlier attempts in this regard, such as the Utopian novel “Walden Two” by B. F. Skinner, who thought that humans could achieve greater happiness and productivity if they just paid better attention to the science of reward and punishment. Skinner’s colleague John Watson even envisioned “baby factories” that would dispense with the “mawkish” emotions humans are prone to, an idea applied with disastrous consequences in Romanian orphanages. And talking of Romania, was not the entire Communist experiment an attempt at a society without God? Apart from the question of how moral these societies turned out to be, I find it intriguing that over time Communism began to look more and more like a religion itself. The singing, marching, reciting of poems and pledges and waving in the air of Little Red Books smacked of holy fervor, hence my remark that any movement that tries to promote a certain moral agenda — even while denying God — will soon look like any old religion. Since people look up to those perceived as more knowledgeable, anyone who wants to promote a certain social agenda, even one based on science, will inevitably come face to face with the human tendency to follow leaders and let them do the thinking.
What I would love to see is a debate among moderates. Perhaps it is an illusion that this can be achieved on the Internet, given how it magnifies disagreements, but I do think that most people will be open to a debate that respects both the beliefs held by many and the triumphs of science. There is no obligation for non-religious people to hate religion, and many believers are open to interrogating their own convictions. If the radicals on both ends are unable to talk with each other, this should not keep the rest of us from doing so.
Frans B. M. de Waal is a biologist interested in primate behavior. He is C. H. Candler Professor in Psychology, and Director of the Living Links Center at the Yerkes National Primate Research Center at Emory University, in Atlanta, and a member of the National Academy of Sciences and the Royal Dutch Academy of Sciences. His latest book is “The Age of Empathy.”
Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.
I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true about the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred again and again would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.
The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Moliere’s character who was shocked to find that he’d been speaking prose his whole life.
Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable making may be telling. It gives some indication of their intellectual personality type, on which side of the two cultures (or maybe two coutures) divide they’re most comfortable.
People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.
The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely: that Smith took a bribe from a lobbyist, or that Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns? Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements A, B, and C, the probability of A is always at least as great as the probability of A, B, and C together, since whenever A, B, and C all occur, A occurs, but not vice versa.
This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?
a.) Linda is a bank teller.
b.) Linda is a bank teller and is active in the feminist movement.
Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
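The inequality behind the Linda problem can be demonstrated by a quick simulation. This is a sketch of my own, not from the essay; the particular probabilities assigned to "bank teller" and "feminist" are invented for illustration, since the inequality holds whatever they are:

```python
import random

random.seed(0)

# Simulate a population in which each person is, independently, a bank
# teller with probability 0.1 and a feminist with probability 0.5.
# (Both figures are arbitrary illustrative assumptions.)
N = 100_000
teller = feminist_teller = 0
for _ in range(N):
    is_teller = random.random() < 0.1
    is_feminist = random.random() < 0.5
    teller += is_teller
    feminist_teller += is_teller and is_feminist

print("P(teller)              ≈", teller / N)
print("P(teller and feminist) ≈", feminist_teller / N)
# The conjunction can never be the more frequent event:
assert feminist_teller <= teller
```

However the two probabilities are chosen, every person counted in the conjunction is also counted in the single condition, so option b.) can never come out more likely than option a.).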
(Incidentally, the conjunction fallacy is especially relevant to religious texts. Embedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)
Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their member(s). Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.
In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.
The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context.
Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy, and so the probability of two boys is 1/2.
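Both answers can be confirmed by direct enumeration. This Python sketch (mine, not from the essay) models the two ways the information can arrive:

```python
from itertools import product

# The four equally likely sibling pairs, in birth order.
families = list(product("BG", repeat=2))  # BB, BG, GB, GG

# Version 1: we are simply told "at least one child is a boy".
at_least_one_boy = [f for f in families if "B" in f]
p1 = sum(f == ("B", "B") for f in at_least_one_boy) / len(at_least_one_boy)
print(p1)  # 1/3

# Version 2: we meet one child, chosen at random, and it happens to be
# a boy. Now each (family, observed child) pair is equally likely.
observations = [(f, i) for f in families for i in (0, 1)]
boy_seen = [(f, i) for f, i in observations if f[i] == "B"]
p2 = sum(f == ("B", "B") for f, i in boy_seen) / len(boy_seen)
print(p2)  # 1/2
```

The code makes the intensional point concrete: the event "at least one boy" is the same in both versions, but the procedure by which we learned it changes the conditional probability.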
Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
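The surprising 13/27 also falls out of a brute-force enumeration. In this sketch (my own, under the standard assumptions of the puzzle: sexes and weekdays independent and uniform) each child is one of 14 equally likely (sex, weekday) combinations:

```python
from itertools import product

# Each child: one of 2 sexes x 7 weekdays = 14 equally likely types.
children = list(product("BG", range(7)))   # let weekday 0 stand for Tuesday
families = list(product(children, repeat=2))  # 196 equally likely families

tuesday_boy = ("B", 0)
conditioned = [f for f in families if tuesday_boy in f]  # 27 families
p = sum(c1[0] == "B" and c2[0] == "B" for c1, c2 in conditioned) / len(conditioned)
print(p)  # 13/27 ≈ 0.481
```

Of the 27 families containing a Tuesday-born boy, 13 have two boys; drop the Tuesday detail and the same enumeration collapses back to 1/3.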
Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they too frequently are invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.
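The birthday paradox mentioned above is easy to compute exactly. This sketch (mine, assuming 365 equally likely birthdays and ignoring leap years) multiplies out the probability that all birthdays are distinct:

```python
def birthday_collision(n: int) -> float:
    """Probability that, among n people, at least two share a birthday,
    assuming 365 equally likely birthdays."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1 - p_distinct

# With only 23 people, a shared birthday is already more likely than not.
print(birthday_collision(23))
```

The crossover at just 23 people is the kind of result that feels like a meaningful coincidence in a story but is a routine consequence of counting pairs.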
I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.
In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.
John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”
Not so long ago, any young man who was so inclined could ski all winter in the mountains of Colorado or Utah on a pauper’s budget. The earnings from a part-time job cleaning toilets or washing dishes were enough to keep him gliding down the mountain by day and buzzing on cheap booze by night, during that glorious adrenaline come-down that these days often involves an expensive hot-stone massage and is unashamedly referred to as “après-ski.”
He had a pretty good run, the American ski bum, but Jeremy Evans’s “In Search of Powder” suggests that the American West’s cold-weather counterculture is pretty much cashed. From Vail to Sun Valley, corporate-owned ski resorts have driven out family-run facilities, and America’s young college grads have mostly ceded control of the lift lines to students from south of the equator on summer break.
Mr. Evans, a newspaper reporter who himself “ignored the next logical step in adult life” to live in snowy Lake Tahoe, identifies with the graying powder hounds that fill his pages, and for the most part he shares their nostalgia for the way things used to be.
During the 1960s and 1970s, in alpine enclaves like Park City, Utah, and Aspen, Colo., hippies squatted in old miners’ shacks and clashed with rednecks. In Tahoe, Bay Area youths developed the liberated style of skiing known as “hot-dogging,” while in Jackson Hole, Wyo., stylish Europeans such as Jean-Claude Killy and Pepi Stiegler went one step further, inspiring generations of American youth to become daredevils on skis—and, eventually, to get paid for it.
Whether these ski bums intended to or not, they helped popularize the sport and make it profitable. What followed was reminiscent of urban gentrification. As the second-home owners took over, property prices outpaced local wages. Today’s would-be ski bum faces prohibitive commutes, and immigrant workers have taken over the sorts of menial jobs that carefree skier types once happily performed.
Skiing and snowboarding aren’t even a ski resort’s main attraction anymore. Rather, they are marketing tools used to boost real-estate sales and entice tourists who would just as soon go on a cruise or take a trip to Las Vegas. Four corporations—Vail Resorts, Booth Creek, Intrawest and American Skiing Co.—run most of the big mountains. Even Telluride, once considered remote and wild, plays host to Oprah Winfrey, Tom Cruise and a parade of summer festivals.
In 2002, Hal Clifford took the corporate ski industry to task in “Downhill Slide.” Mr. Evans’s book incorporates Mr. Clifford’s most salient findings, but his oral-history method limits his view of the topic. Mr. Evans would have done well, in particular, to mine the obvious connections between ski and surf culture. (Dick Barrymore’s landmark 1969 film, “Last of the Ski Bums,” was more or less an alpine remake of the classic 1966 surfer documentary “The Endless Summer.”)
Like surfing, skiing first went from sport to lifestyle during the 1960s and thus came of age with the baby boomers. They made skiing sexy and rebellious, and then they made it a big business. Two of America’s most exclusive mountain resorts, Beaver Creek (in Colorado) and Deer Valley (in Utah), opened around 1980—right when baby boomers hit a sweet spot in terms of athleticism and net worth. Now, as they age, golf courses and luxury spas have become de rigueur in big ski towns.
Another boomer legacy is the entire notion that the old-fashioned ski bum was a blessed soul who somehow belonged to the natural order. More likely, his was just a brief, Shangri-La moment. For ski towns in the West, the real problem is what will happen as the prosperous, once free-spirited baby-boomer generation begins to wane.
Mr. Hartman contributes to Style.com and VanityFair.com.
This is the ninth in a series.
The subject of this column is caricature, but I’m not going to explain or demonstrate it myself. When the art god was doling out the syrup of graphic wit, he must have slipped on a banana peel just as he got to my cup and most of it spilled out on the floor. This being the case, I have chosen three artists whose cups of graphic wit truly runneth over and whose work represents caricature at its highest and most droll level of accomplishment.
Two are friends of many years and are literary wits as well as celebrated artists: Edward Sorel, whose covers for The New Yorker are legendary, and Robert Grossman, whose animated films, comic strips and sculptures are both political and hilarious. The third artist, Tom Bachtell, creates stylish drawings for The New Yorker every week and, memorably, for many months played graphic games with George Bush’s eyebrows.
I asked each of the artists to create a caricature of Pablo Picasso and to give us whatever back story on their process they chose to share. I think the results show that in order to draw funny, it really helps to be able to free-associate with fish, ex-wives and square eyes.
So here’s Picasso — three ways.
Thought process: Picasso. Intense gaze. Makes sense in his case. One of his gimmicks was to put both eyes on one side of a face, which nature had only ever done in the instance of the flounder. Can I show him as a flounder?
Refining the flounder concept until I realize I’m the one who’s floundering.
Pablo in Art Heaven glaring down at the puny efforts of mere mortals.
“I work in brush and ink. I drew the face a dozen times, playing with various brushes, strokes, line weight and other ways of applying the ink. I started to imagine the face on the surface of the paper and chase after it with the brush, trying to capture the squat, vigorous, self-confident poser that I see when I think of Picasso, those black eyes blazing out at the viewer. Since he often broke faces into different, distorted planes I felt free to do that, as well as making his eyes into squares and his nose into a Guernica-like protuberance.”
In the next column I introduce the challenge and the possibility of drawing the figure.
James McMullan, New York Times
I have spent almost a quarter century photographing philosophers. For the most part, philosophers exist, and have always existed, outside the public spotlight. Yet when we reflect upon those eras of humankind that burn especially bright, it is largely the philosophers that we remember. Despite being unknown in their own time, the philosophers of an era survive longer in collective memory than wealthy noblemen and politicians, or the popular figures of stage, song and stadium. Because of this disconnect between living fame and later recognition, we have less of a record of these thinkers than we should. Our museums are filled with busts and paintings of long-forgotten wealth and beauty instead of the philosophers who have so influenced contemporary politics and society. My aim in this project has been the modest one of making sure that, for this era at least, there is some record of the philosophers.
I did not initially plan to spend more than 20 years with philosophers. It was the fall of 1988. I was working for a number of different magazines, primarily The Face, taking photographs of the kind of cultural figures who typically rise to public awareness — musicians, artists, actors and novelists. One day I received an assignment to photograph the philosopher Sir Alfred Ayer. Ayer was dimly known to me, England being one of those rare countries in which each era has a few public philosophers — respected thinkers who are called upon to comment on the issues of the day. When I was growing up in the Midlands of the United Kingdom in the 1960s, that figure was Bertrand Russell. Later, he was replaced by Ayer. Still, I knew very little about Ayer other than what I recalled from snippets of him on the BBC’s “Question Time.”
I was told in advance that he was very ill, and that my time with him would be limited to 10 minutes. When I walked into the room, he was wearing an oxygen mask. There were two women in the room. I can’t remember how we got beyond those evident barriers — the social and the physical — but I remained with him for four hours. We talked about many things, but mainly the Second World War. Apparently, many Oxford philosophers had been involved in the war effort, in intelligence. I recall in particular a story Ayer told me about having saved De Gaulle’s life from a faction of the French resistance.
I can’t identify why I found him such a compelling and fascinating figure. Partly it was him. But it was also the fact that philosophers come with a certain combination of mystery and weight. Our discussion left me with a burning desire to meet more philosophers. That is how my project started.
Philosophy is not the only profession I have cataloged. For example, I have also taken many photographs of filmmakers over the years. But my relationship with filmmakers is very different from my relationship with philosophers. My extensive experience with film gives me the ability to make my own judgments of relative merit. A sophisticated appreciation of film is something that many of us can and do cultivate. In the case of philosophers, however, I am, like most people, at sea. The philosophers whose work is most admired by other philosophers are very different from the philosophers who occasionally float to public consciousness. These are not people with connections to the larger world of media (one thing I have learned over these many years is that the cast of mind that leads one to philosophy is rarely one that lends itself to networking). I could only hope to be guided to them by those who had themselves struggled with philosophy’s problems.
After my meeting with Ayer, I devised a plan to ask each philosopher I photographed for three names of philosophers they admired. Initially, I planned to meet and photograph perhaps 15 philosophers, and publish the results in a magazine. I certainly had no plan to spend the next quarter century pursuing philosophers around the globe. But Ayer had given me the names of four — Isaiah Berlin, Michael Dummett, Ted Honderich and Peter Strawson. Each of them in turn gave me three names, and there was not as much overlap as I had expected. My initial plan had to be modified. Soon, I settled on a formula. If a philosopher was mentioned by three different philosophers as someone whose work was important, I would photograph that philosopher. Of course, employing this formula required meeting many more philosophers. The idea of a short project with 15 photographs was rapidly shelved. To date, the list of those I’ve photographed is nearly 200 names long.
Throughout my career I have had to pursue my work with philosophers while making a living with my other professional work, and the cost has sometimes been high. But like any artist who has completed a large and demanding project, I have had good fortune.
Early in my career, I lived not far from Oxford University, a great center for philosophy for centuries. At that time, it employed many of the people whose names were mentioned most by other philosophers. In 2004, I moved to New York to take up a position at The New Yorker. The departments of philosophy at New York University and Rutgers University, like Oxford, are also staffed by many of the figures most mentioned by other philosophers. The New York area also has many other first-rate philosophy departments. Philosophers are a garrulous and argumentative species. Their chief form of social interaction is the lecture, which is typically an hour long, and followed by an hour of probing and aggressive objections from the audience. If one of the figures mentioned three times by other philosophers was not teaching at one of these departments, they almost certainly came to lecture at one of them at some point over the last seven years. My project has benefited from this happenstance. No doubt, I have missed many philosophers worthy of photographing. But had I not been in New York these past six years, I would have missed many more.
In the course of my work, I knew that most appreciators of art, even the most educated, would have but a dim window on the views of the philosophers I was photographing. So I asked each philosopher I photographed to supply 50 words summarizing their work or their view of philosophy (perhaps not surprisingly, several exceeded that limit). These statements are as much a part of the work as the pictures themselves. Statement and portrait together form a single image of a thinker.
Most philosophers have spent their entire lives in intense concentration, developing and defending lines of argument that can withstand the fearsome critical scrutiny of their peers. Perhaps this leaves some mark on their faces; that I leave for others to judge.
Steve Pyke, a contributing photographer at The New Yorker and at Vanity Fair since 1998, has recently completed a series of portraits of the Apollo astronauts. The second volume of “Philosophers,” with more than 100 portraits, will be published by Oxford University Press in May 2011.
To be an effective speaker, you first have to win the confidence of your audience. In my case, I’m usually working with very talented young tennis players, often teenagers. My goal is to help them to develop into world-class competitors. To achieve this, they need to view me as a benevolent dictator. I have to persuade them that I know what I’m talking about and that they should listen to me, but I also need them to understand that I care about them, as players and people.
Establishing your authority or expertise is not a matter of bravado. It depends on the needs of individual players and what they respond to best. Jim Courier was a player who responded to toughness, and I’d use a firm voice to kick him into gear. When I was trying to teach Serena Williams that she had to play every shot as if it were match point at Wimbledon, we were often in each other’s faces, just short of body contact, when we exchanged our thoughts about a rally or point. With Monica Seles, by contrast, I was always sure to use kid gloves.
Sometimes you show that you know your stuff by saying nothing. In the late ’90s, I was asked to be the coach of Boris Becker, who had already achieved greatness as the youngest player ever to win Wimbledon. I went to Munich with the goal of getting Boris back into physical shape and reviewing every aspect of his game. I watched and watched for two weeks, and never said a word. Boris finally turned to me and said, “Mr. B, can you talk?” My answer: “When I talk to you, I better know what to say.” His answer: “Mr. B, we will get along real well.”
You can’t remake someone’s game all at once, so my brand of instruction has always been to give simple words of advice. When I focused on footwork with Yuki Bhambri (currently the No. 1 junior men’s player in the world), I let him know that the recovery step is crucial to any level of play, especially if you hit from a neutral stance. Yuki would respond to these simple verbal tips, and then I’d demonstrate to him exactly what I meant.
At the beginning of my career, I found it very difficult to listen to anyone, but you can’t imagine how much more persuasive you become when you listen well. In today’s world, young people expect to be heard, and their input very often helps you to work with them.
As a student at my academy 25 years ago, Andre Agassi was always testing the boundaries, especially with the dress code. His hair was long and dyed, he wore nail polish, he wouldn’t wear regular tennis clothes. My first instinct was to make him conform, but I still remember one year when he stopped by my office, before going home for Christmas. “Nick,” he said, “the head of the school wants me to cut my hair and dress a little different. Can you please change his mind?” I listened, and we decided to let him be himself.
With a player like Andre, as with anyone you’re trying to motivate, you have to get a sense of their individual spirit and try to harness it to develop top talent. Benevolence was not enough with Andre, but it had to be part of the mix.
Nick Bollettieri founded Nick Bollettieri Tennis Academy in 1978 and is the president of IMG Academies.
Polemic seems to have gone the way of the typewriter and the soda fountain. The word was once associated with the best practitioners of the form: Voltaire, Jonathan Swift, George Orwell, Rebecca West. Nowadays, if you say “polemic,” you get strange looks, as if you were referring not to refined argument, especially written argument, but to some sort of purgative.
Of course, polemic isn’t just an argument. It’s a literary offensive prosecuted with the goal not only of winning your point but of demolishing your opponent’s entire case. The name-calling and yelling on cable TV and the Internet have made us forget that the classic polemic is a dazzling, meticulously crafted statement.
It was precisely the intellectually demanding nature of polemic that enthralled me when I was a teenager. While my pals were out driving around, I spent dusty summers in my town library, where I lost myself in back issues of Partisan Review and Commentary. Debating with the world seemed the best way to make my way into the world. James Joyce once said that words were a poor boy’s arsenal. For me, it was words built into arguments.
My father was a professional pianist, and though I never learned to play the piano with great proficiency, I was entranced by the function of the pedals. You could make the notes quaver, persist, echo or abruptly stop. Eventually, I found that to make a forceful polemic, the argument had to have pedals. That way, you could even make it beautiful.
With the right use of rhetorical pedals, a writer can pour into polemic the care and craft that poets and novelists invest in their art. Confronted by a piece of lofty illogic, you can draw some telling counterpoint from everyday life. An armchair warrior, for instance, might be put in his place by this: “Any man who considers war an exercise in nobility is either not a father or an unfeeling one.” You can deploy a literary allusion or historical reference to show that a pompous emperor lacks clothes, as it were. Does a writer sentimentalize Africans as innocent children, corrupted by Western imperialism? Introduce her to the African role in the slave trade or the recent history of the Congo.
The most effective polemic, I have always found, is actually a kind of musical reinterpretation of someone else’s argument. Arguments are rarely wrong. Rather, they are ethically or intellectually unfinished. To return to my example above: Though it is hardly the whole story, imperialism did indeed help to set Africa on a destructive path. A successful polemic would have to concede the point, before completing it with the larger point that attributing Africa’s problems to Western imperialism is yet another way of robbing Africans of their independence.
The most successful polemic against something must also be a form of understanding the thing that you are disputing. Empathy is indispensable to a forensic drubbing. A talented polemicist should win some converts from the opposition simply by making the case that his opponent’s argument isn’t crazed or pernicious, just radically incomplete.
In our age of feral opining, however, we have no use for patient absorption in someone else’s logic and rhetoric. Better to scream obscenities into the white void of your screen, turn off your computer and… But here I go, succumbing to my great passion and starting to polemicize.
Which brings me to my final point: Disappointed love, not hatred or aversion, is the strongest motivation behind the urge to make an argument sing, rather than shout. And that is why the screamers on every side leave me so cold. They argue without love — and they are not true polemicists.
Lee Siegel is a columnist and editor at large for the New York Observer.
“Have you ever written political speeches? It’s a particularly low form of rhetoric.”
Thus did Arthur M. Schlesinger Jr., who knew a few things about writing political speeches, welcome me to the guild. It was early 1998, and I was just preparing to join the White House staff as a speechwriter for President Bill Clinton—who, by the way, did not disagree with Mr. Schlesinger. Mr. Clinton’s face soured when he said “rhetoric.” To him it was an epithet. “I don’t want rhetoric,” he’d complain. “I actually want to say something.”
Political rhetoric has a bad rap. Except in certain college courses—where the speeches of Lincoln and Churchill are pinned down like dead frogs and inspected for signs of logos, elocutio, and aposiopesis—rhetoric is held beneath contempt. And not without cause. The speeches of this wretched campaign year have brought to mind what H.L. Mencken said of President Warren G. Harding’s rhetoric: “It reminds me of a string of wet sponges…of stale bean-soup, of college yells, of dogs barking idiotically through endless nights. It is so bad that a sort of grandeur creeps into it.”
I’ll concede that speechwriters are part of the problem. We tend to blame the person at the podium, but the enemy, let’s admit it, is us, too. Speechwriters have an unhealthy affection for alliteration (“nattering nabobs of negativism”). We cling to clichés (see above, “the enemy is us”). And we tend to believe that phrases like “take back our country,” if repeated enough, might come to mean something.
But smart speechwriting still has the power to inspire, educate, entertain—and to make a difference. As a writer, the first thing to remember is that a good speech has a point. It is purpose-driven. It is ends-oriented. Its true test is not whether it gets quoted in Bartlett’s, but whether it gets people to embrace an idea, support a bill, throw a bum out, invest in plastics or stop feeding their kids potato chips for breakfast.
But how is this alchemy achieved? Google “speechwriting,” and you’ll enter a thicket of rules, tips and tricks of the trade. Many of them work as advertised. But formulas only carry you so far. Speechwriters reach for whatever tools they need—reason, emotion, repetition, humor, statistics, stories—to frame and win an argument. Organization is always important. To give the speech forward momentum, a kind of inevitability, certain ideas must be established in a certain sequence.
Every word should serve that goal. “Why is that in there?” President Clinton would ask us, before drawing a neat, black line through the offending phrase. Occasionally, we’d return the favor: under cover of darkness at the 2000 Democratic National Convention, I struck a sentence the president had dictated about his administration’s success in reducing the rate of salmonella infections.
This single-mindedness was apparent in every line of the 2008 speech in which Bill Gates introduced his concept of “creative capitalism.” Through real-world examples and a rousing call to action, he showed how institutions can “stretch the reach of market forces so that more people can make a profit, or gain recognition, doing work that eases the world’s inequities.” Or consider Steve Jobs. In his commencement address at Stanford in 2005, he told very personal stories, including one about “facing death” as he battled cancer, to inspire graduates to live by their own intuition.
Such speeches redeem the promise of the spoken word and show that, this campaign season notwithstanding, rhetoric need not be at odds with reality.
Jeff Shesol, a partner at West Wing Writers, is the author of “Supreme Power: Franklin Roosevelt vs. the Supreme Court.”
As of today, the Troubled Asset Relief Program, the emergency bailout born in the financial panic of 2008 and known as TARP, is no more. Done. Finished. Kaput.
Last month the Congressional Oversight Panel issued a report assessing the program. It makes for grim reading. Once it is conceded that government intervention was necessary and generally successful in heading off an economic disaster, the narrative heads downhill quickly: TARP was badly mismanaged, the report says, it created significant moral hazard and failed miserably in providing mortgage foreclosure relief.
That may not seem like a shocking revelation. Everyone — left, right, center, red state, blue state, even Martians — hated the bailout of Wall Street, apart of course from the bankers and dealers themselves, who could not even manage a graceful moment of red-faced shame before they eagerly restocked their far from empty vaults. A perhaps bare majority, or more likely just a significant minority, nonetheless thought the bailouts were necessary. But even those who thought them necessary were grieved and repulsed. There was, I am suggesting, no moral disagreement about TARP and the bailouts — they stank. The only significant disagreement was practical and causal: would the impact of not bailing out the banks be catastrophic for the economy as a whole or not? No one truly knew the answer to this question, but that being so the government decided that it could not and should not play roulette with the future of the nation and did the dirty deed.
That we all agreed about the moral ugliness of the bailouts should have led us to implement new and powerful regulatory mechanisms. The financial overhaul bill that passed Congress in July certainly fell well short of what would be necessary to head off the next crisis. Clearly, political deal-making and the influence of Wall Street over our politicians are part of the explanation for this failure; but the failure also expressed continuing disagreement about the nature of the free market. In pondering this issue I want to, again, draw on the resources of Georg W.F. Hegel. He is not, by a long shot, the only philosopher who could provide a glimmer of philosophical illumination in this area. But the primary topic of his practical philosophy was analyzing the exact point where modern individualism and the essential institutions of modern life meet. And right now, this is also where many of the hot-button topics of the day reside.
Hegel, of course, never directly wrote about Wall Street, but he was philosophically invested in the logic of market relations. Near the middle of the “Phenomenology of Spirit” (1807), he presents an argument that says, in effect: if Wall Street brokers and bankers understood themselves and their institutional world aright, they would not only accede to firm regulatory controls to govern their actions, but would enthusiastically welcome regulation. Hegel’s emphatic but paradoxical way of stating this is to say that if the free market individualist acts “in [his] own self-interest, [he] simply does not know what [he] is doing, and if [he] affirms that all men act in their own self-interest, [he] merely asserts that all men are not really aware of what acting really amounts to.” For Hegel, the idea of unconditioned rational self-interest — of, say, acting solely on the motive of making a maximal profit — simply mistakes what human action is or could be, and is thus rationally unintelligible. Self-interested action, in the sense it is used by contemporary brokers and bankers, is impossible. If Hegel is right, there may be deeper and more basic reasons for strong market regulation than we have imagined.
The “Phenomenology” is a philosophical portrait gallery that presents depictions, one after another, of different, fundamental ways in which individuals and societies have understood themselves. Each self-understanding has two parts: an account of how a particular kind of self understands itself and, then, an account of the world that the self considers its natural counterpart. Hegel narrates how each formation of self and world collapses because of a mismatch between self-conception and how that self conceives of the larger world. Hegel thinks we can see how history has been driven by misshapen forms of life in which the self-understanding of agents and the worldly practices they participate in fail to correspond. With great drama, he claims that his narrative is a “highway of despair.”
The discussion of market rationality occurs in a section of the “Phenomenology” called “Virtue and the way of the world.” Believing in the natural goodness of man, the virtuous self strives after moral self-perfection in opposition to the wicked self-interested practices of the marketplace, the so-called “way of the world.” Most of this section is dedicated to demonstrating how hollow and absurd is the idea of a “knight of virtue” — a fuzzy, liberal Don Quixote tramping around a modern world in which the free market is the central institution. Against the virtuous self’s “pompous talk about what is best for humanity and about the oppression of humanity, this incessant chatting about the sacrifice of the good,” the “way of the world” is easily victorious.
However, what Hegel’s probing account means to show is that the defender of holier-than-thou virtue and the self-interested Wall Street banker are making the same error from opposing points of view. Each supposes he has a true understanding of what naturally moves individuals to action. The knight of virtue thinks we are intrinsically good and that acting in the nasty, individualist, market world requires the sacrifice of natural goodness; the banker believes that only raw self-interest, the profit motive, ever leads to successful actions.
Both are wrong because, finally, it is not motives but actions that matter, and how those actions hang together to make a practical world. What makes the propounding of virtue illusory — just so much rhetoric — is that there is no world, no interlocking set of practices into which its actions could fit and have traction: propounding peace and love without practical or institutional engagement is delusion, not virtue. Conversely, what makes self-interested individuality effective is not its self-interested motives, but that there is an elaborate system of practices that supports, empowers, and gives enduring significance to the banker’s actions. Actions only succeed as parts of practices that can reproduce themselves over time. To will an action is to will a practical world in which actions of that kind can be satisfied — no corresponding world, no satisfaction. Hence the banker must have a world-interest as the counterpart to his self-interest or his actions would become as illusory as those of the knight of virtue. What bankers do, Hegel is urging, is satisfy a function within a complex system that gives their actions functional significance.
Actions are elements of practices, and practices give individual actions their meaning. Without the game of basketball, there are just balls flying around with no purpose. The rules of the game give the action of putting the ball through the net the meaning of scoring, where scoring is something one does for the sake of the team. A star player can forget all this and pursue personal glory, his private self-interest. But if that star — say, Kobe Bryant — forgets his team in the process, he may, in the short term, get rich, but the team will lose. Only by playing his role on the team, by having an L.A. Laker interest as well as a Kobe Bryant interest, can he succeed. I guess in this analogy, Phil Jackson has the role of “the regulator.”
The series of events leading up to the near economic collapse has shown Wall Street traders and bankers to be essentially knights of self-interest — bad Kobe Bryants. The function of Wall Street is the allocation of capital; as Adam Smith instructed, Wall Street’s task is to get capital to wherever it will do the most good in the production of goods and services. When the financial sector is fulfilling its function well, an individual banker succeeds only if he is routinely successful in placing investors’ capital in businesses that over time are profitable. Time matters here because what must be promoted is the practice’s capacity to reproduce itself. In this simplified scenario, Wall Street profits are tightly bound to the extra wealth produced by successful industries.
Every account of the financial crisis points to a terrifying series of structures that all have the same character: the profit-driven actions of the financial sector became increasingly detached from their function of supporting and advancing the growth of capital. What thus emerged were patterns of action which may have seemed to reflect the “way of the world” but, in financial terms, were as empty as those of a knight of virtue, leading to the near collapse of the system as a whole. A system of compensation that provides huge bonuses based on short-term profits necessarily ignores the long-term interests of investors. As does a system that ignores the creditworthiness of borrowers, allows credit rating agencies to be paid by those they rate, and encourages the creation of highly complex and deceptive financial instruments. In each case, the actions — and profits — of the financial agents became insulated from both the interests of investors and the wealth-creating needs of industry.
Despite the fact that we have seen how current practices are practically self-defeating for the system as a whole, the bill that emerged from the Congress comes nowhere near putting an end to the practices that necessitated the bailouts. Every one of those practices will remain in place with just a veneer of regulation giving them the look of legitimacy.
What market regulations should prohibit are practices in which profit-taking can routinely occur without wealth creation; wealth creation is the world-interest that makes bankers’ self-interest possible. Arguments that market discipline, the discipline of self-interest, should allow Wall Street to remain self-regulating only reveal that Wall Street, as Hegel would say, “simply does not know what it is doing.”
We know that nearly all the financial conditions that led to the economic crisis were the same in Canada as they were in the United States with a single, glaring exception: Canada did not deregulate its banks and financial sector, and, as a consequence, Canada avoided the worst of the economic crisis that continues to warp the infrastructure of American life. Nothing but fierce and smart government regulation can head off another American economic crisis in the future. This is not a matter of “balancing” the interests of free-market inventiveness against the need for stability; nor is it a matter of a clash between the ideology of the free-market versus the ideology of government control. Nor is it, even, a matter of a choice between neo-liberal economic theory and neo-Keynesian theory. Rather, as Hegel would have insisted, regulation is the force of reason needed to undo the concoctions of fantasy.
J.M. Bernstein is University Distinguished Professor of Philosophy at the New School for Social Research and the author of five books. He is now completing a book entitled “Torture and Dignity.”
Ah, the airport, where modern folk heroes are made. The airport, where that inspired flight attendant did what everyone who’s ever been in the spam-in-a-can crush of a flying aluminum tube – where we collectively pretend that a clutch of peanuts is a meal and a seat cushion is a “flotation device” – has always dreamed of doing: pull the lever, blow the door, explode the chute, grab a beer, slide to the tarmac and walk through the gates to the sanity that lies beyond. Not since Rick and Louis disappeared into the Casablanca fog headed for the Free French garrison in Brazzaville has a stroll on the tarmac thrilled so many.
Who cares that the crazed steward got arrested, pleaded guilty to sundry charges, and probably was a rude, unpleasant SOB to begin with? Bonnie and Clyde were psychopaths, yet what child of the ’60s did not fall in love with Faye Dunaway and Warren Beatty?
And now three months later, the newest airport hero arrives. His genius was not innovation in getting out, but deconstructing the entire process of getting in. John Tyner, cleverly armed with an iPhone to give YouTube immortality to the encounter, took exception to the TSA guard about to give him the benefit of Homeland Security’s newest brainstorm – the upgraded, full-palm, up the groin, all-body pat-down. In a stroke, the young man ascended to myth, or at least the next edition of Bartlett’s, warning the agent not to “touch my junk.”
Not quite the 18th-century elegance of “Don’t Tread on Me,” but the age of Twitter has a different cadence from the age of the musket. What the modern battle cry lacks in archaic charm, it makes up for in full-body syllabic punch.
Don’t touch my junk is the anthem of the modern man, the Tea Party patriot, the late-life libertarian, the midterm election voter. Don’t touch my junk, Obamacare – get out of my doctor’s examining room, I’m wearing a paper-thin gown slit down the back. Don’t touch my junk, Google – Street View is cool, but get off my street. Don’t touch my junk, you airport security goon – my package belongs to no one but me, and do you really think I’m a Nigerian nut job preparing for my 72-virgin orgy by blowing my johnson to kingdom come?
In “Up in the Air,” that ironic take on the cramped freneticism of airport life, George Clooney explains why he always follows Asians in the security line:
“They pack light, travel efficiently, and they got a thing for slip-on shoes, God love ’em.”
“I’m like my mother. I stereotype. It’s faster.”
That riff is a crowd-pleaser because everyone knows that the entire apparatus of the security line is a national homage to political correctness. Nowhere do more people meekly acquiesce to more useless inconvenience and needless indignity for less purpose. Wizened seniors strain to untie their shoes; beltless salesmen struggle comically to hold up their pants; 3-year-olds scream while being searched insanely for explosives – when everyone, everyone, knows that none of these people is a threat to anyone.
The ultimate idiocy is the full-body screening of the pilot. The pilot doesn’t need a bomb or box cutter to bring down a plane. All he has to do is drive it into the water, like the EgyptAir pilot who crashed his plane off Nantucket while intoning “I rely on God,” killing all on board.
But we must not bring that up. We pretend that we go through this nonsense as a small price paid to ensure the safety of air travel. Rubbish. This has nothing to do with safety – 95 percent of these inspections, searches, shoe removals and pat-downs are ridiculously unnecessary. The only reason we continue to do this is that people are too cowed to even question the absurd taboo against profiling – when the profile of the airline attacker is narrow, concrete, uniquely definable and universally known. So instead of seeking out terrorists, we seek out tubes of gel in stroller pouches.
The junk man’s revolt marks the point at which a docile public declares that it will tolerate only so much idiocy. Metal detector? Back-of-the-hand pat? Okay. We will swallow hard and pretend airline attackers are randomly distributed in the population.
But now you insist on a full-body scan, a fairly accurate representation of my naked image to be viewed by a total stranger? Or alternatively, the full-body pat-down, which, as the junk man correctly noted, would be sexual assault if performed by anyone else?
This time you have gone too far, Big Bro’. The sleeping giant awakes. Take my shoes, remove my belt, waste my time and try my patience. But don’t touch my junk.
Charles Krauthammer, Washington Post
THE politics of the latest attacks by Hindu nationalists on Indian authors is not terribly hard to divine. One extremist bunch, the Rashtriya Swayamsevak Sangh (RSS), an outfit often banned by India’s government, has threatened Arundhati Roy, a prize-winning Indian novelist turned political activist. Ms Roy’s crime? That in recent weeks she dared to speak out in favour of protesting (Muslim) Kashmiris, some 110 of whom have been killed in a police crackdown that began in the summer. Ms Roy’s call for an inquiry into those deaths has led the RSS to demand that she be charged with sedition. Hindu nationalists reportedly attacked Ms Roy’s home in Delhi at the end of October, determined to settle scores personally.
This followed a similar move by another Hindu outfit to ban a book by Rohinton Mistry, an Indian-born Canadian novelist. In this case the thuggish Shiv Sena, a powerful political party in the western state of Maharashtra, has fiercely objected to Mr Mistry’s “Such a Long Journey”, a novel that has become part of the university curriculum in Mumbai, the state capital. At issue is the fact that the book lampoons Bal Thackeray, a Mumbai kingpin who founded the Hindu nationalist Shiv Sena party over four decades ago. Aditya Thackeray, his grandson and a student in Mumbai, helped to whip up a storm against the novel, ultimately encouraging the university to drop the book from its classes. Even the chief minister of the state has called “Such a Long Journey” abusive.
But why make a fuss now, considering the book was published nearly 20 years ago? It seems that the young Mr Thackeray has political ambitions of his own, and this was a handy way to draw attention to his Hindu nationalist credentials. Indeed, Shiv Sena has a reputation for being tetchy towards even moderate Hindus who dare to suggest that Muslims or Pakistanis might have views worth listening to. Early this year when Shah Rukh Khan, a Bollywood star, pointed out the stupidity of leaving Pakistani cricketers out of the Indian Premier League, the elder Mr Thackeray threatened to disrupt the release of his latest film.
Hindu nationalists work up a lather in such cases to put pressure on the Congress party, which looks powerful at the national level but much less so at the state level. If Congress slips in Maharashtra, where a property scandal could yet bring down many of Congress’s leaders, the likely political beneficiaries would be Hindu nationalists of various stripes, including Shiv Sena and the Thackerays.
As for Mr Mistry, the 58-year-old author has published just three novels, but each has been shortlisted for the Man Booker prize. Though slow, he is an elegant writer. When his last novel, “Family Matters”, came out in 2002, The Economist praised him as “one of the best of the Indian writers in English”. His publisher is eagerly awaiting his latest book, which is nearly finished. A new book would raise Mr Mistry’s profile among critics and readers, and perhaps stiffen the spine of Mumbai university.
On Nov. 4, Anderson Cooper did the country a favor. He expertly deconstructed on his CNN show the bogus rumor that President Obama’s trip to Asia would cost $200 million a day. This was an important “story.” It underscored just how far ahead of his time Mark Twain was when he said a century before the Internet, “A lie can travel halfway around the world while the truth is putting on its shoes.” But it also showed that there is an antidote to malicious journalism — and that’s good journalism.
In case you missed it, a story circulated around the Web on the eve of President Obama’s trip that it would cost U.S. taxpayers $200 million a day — about $2 billion for the entire trip. Cooper said he felt impelled to check it out because the evening before he had had Representative Michele Bachmann of Minnesota, a Republican and Tea Party favorite, on his show and had asked her where exactly Republicans will cut the budget.
Instead of giving specifics, Bachmann used her airtime to inject a phony story into the mainstream. She answered: “I think we know that just within a day or so the president of the United States will be taking a trip over to India that is expected to cost the taxpayers $200 million a day. He’s taking 2,000 people with him. He’ll be renting over 870 rooms in India, and these are five-star hotel rooms at the Taj Mahal Palace Hotel. This is the kind of over-the-top spending.”
The next night, Cooper explained that he felt compelled to trace that story back to its source, since someone had used his show to circulate it. His research, he said, found that it had originated from a quote by “an alleged Indian provincial official,” from the Indian state of Maharashtra, “reported by India’s Press Trust, their equivalent of our A.P. or Reuters. I say ‘alleged,’ provincial official,” Cooper added, “because we have no idea who this person is, no name was given.”
It is hard to get any more flimsy than a senior unnamed Indian official from Maharashtra talking about the cost of an Asian trip by the American president.
“It was an anonymous quote,” said Cooper. “Some reporter in India wrote this article with this figure in it. No proof was given; no follow-up reporting was done. Now you’d think if a member of Congress was going to use this figure as a fact, she would want to be pretty darn sure it was accurate, right? But there hasn’t been any follow-up reporting on this Indian story. The Indian article was picked up by The Drudge Report and other sites online, and it quickly made its way into conservative talk radio.”
Cooper then showed the following snippets: Rush Limbaugh talking about Obama’s trip: “In two days from now, he’ll be in India at $200 million a day.” Then Glenn Beck, on his radio show, saying: “Have you ever seen the president, ever seen the president go over for a vacation where you needed 34 warships, $2 billion — $2 billion, 34 warships. We are sending — he’s traveling with 3,000 people.” In Beck’s rendition, the president’s official state visit to India became “a vacation” accompanied by one-tenth of the U.S. Navy. Ditto the conservative radio talk-show host Michael Savage. He said, “$200 million? $200 million each day on security and other aspects of this incredible royalist visit; 3,000 people, including Secret Service agents.”
Cooper then added: “Again, no one really seemed to care to check the facts. For security reasons, the White House doesn’t comment on logistics of presidential trips, but they have made an exception this time.” He then quoted Robert Gibbs, the White House press secretary, as saying, “I am not going to go into how much it costs to protect the president, [but this trip] is comparable to when President Clinton and when President Bush traveled abroad. This trip doesn’t cost $200 million a day.” Geoff Morrell, the Pentagon press secretary, said: “I will take the liberty this time of dismissing as absolutely absurd, this notion that somehow we were deploying 10 percent of the Navy and some 34 ships and an aircraft carrier in support of the president’s trip to Asia. That’s just comical. Nothing close to that is being done.”
Cooper also pointed out that, according to the Congressional Budget Office, the entire war effort in Afghanistan was costing about $190 million a day and that President Bill Clinton’s 1998 trip to Africa — with 1,300 people and of roughly similar duration — cost, according to the Government Accountability Office and adjusted for inflation, “about $5.2 million a day.”
When widely followed public figures feel free to say anything, without any fact-checking, we have a problem. It becomes impossible for a democracy to think intelligently about big issues — deficit reduction, health care, taxes, energy/climate — let alone act on them. Facts, opinions and fabrications just blend together. But the carnival barkers that so dominate our public debate today are not going away — and neither is the Internet. All you can hope is that more people will do what Cooper did — so when the next crazy lie races around the world, people’s first instinct will be to doubt it, not repeat it.
Thomas L. Friedman, New York Times
Bill O’Reilly wants my head.
On Thursday night, the Fox News host asked, as part of a show that would be seen by 5.5 million people: “Does sharia law say we can behead Dana Milbank?” He then added, “That was a joke.”
Hilarious! Decapitation jokes just slay me, and this one had all the more hilarity because the topic of journalist beheadings brings to mind my late friend and colleague Danny Pearl, who replaced me in the Wall Street Journal’s London bureau and later was murdered in Pakistan by people who thought sharia justified it.
The next night, O’Reilly read a complaint from one of his viewers, Heidi Haverlock of Cleveland, who said: “I thought the joke about whether sharia law would allow the beheading of the Washington Post guy was completely inappropriate.” O’Reilly replied to her on air: “Well, let me break this to you gently, Heidi. If Dana Milbank did in Iran what he does in Washington, he’d be hummus.”
O’Reilly is partly right about that. As an American and a Jew, I probably wouldn’t last long in Iran. And criticizing the government there, as I do here, wouldn’t add to my life expectancy. But what was he trying to say? That America would be better if it were more like Iran?
O’Reilly’s on-air fantasizing about violent ends for me was precipitated by a column I wrote describing Fox News’s election-night coverage as a victory party for the Republicans. This didn’t strike me as a terribly controversial point, but it evidently offended O’Reilly. “He said there were no Democrats except for Schoen on,” O’Reilly complained. “It was an outright lie.”
That would have been an outright lie, except that I said no such thing. I wrote: “To be fair and balanced, Fox brought in a nominal Democrat, pollster Doug Schoen. ‘This is a complete repudiation of the Democratic Party,’ he proclaimed.”
Though I didn’t claim Schoen was the sole Democrat, in hindsight I should have quoted other putative liberals who appeared on Fox that night – and sounded much like Schoen. There was Bob Beckel, proclaiming: “I feel like the blind guy whose guide dog died” and “I give all the credit to Republicans on this.” Or Juan Williams on President Obama: “I just don’t think he gets it.”
I suspect O’Reilly’s fury – he went after me on three consecutive nights last week – has less to do with one sentence in one column than with a book and a series of columns I’ve written about O’Reilly’s colleague Glenn Beck. I’ve argued that Beck, with his talk of violence, Nazis and conspiracy theories, is all but inviting fringe characters to take up arms. I’ve held O’Reilly up as a responsible alternative to Beck – but O’Reilly seems determined to prove this wrong.
On Thursday night, he made an eerie reference to The Post’s editorial page editor. “Would you put Fred Hiatt’s picture up on the screen here?” he asked. “This is the editor, Milbank’s editor, Fred Hiatt. And, Fred won’t do anything about Milbank lying in his column. I just want everybody in America to know what The Washington Post has come to. All right, you can take Fred’s picture off. Fred, have a nice weekend, buddy.”
Shortly after this, O’Reilly proposed to his fellow Fox News host, Megyn Kelly, a way to handle their disagreement with me: “I think you and I should go and beat him up.”
The two continued on to a discussion of the attempt to bar sharia law in Oklahoma. That’s when he made his little “joke” about beheading me, which led to his talk the next night about garbanzo puree.
Kelly, too, took issue with what I wrote, but to her credit she didn’t join in O’Reilly’s violent fantasies. “When somebody missteps, especially when it comes to any sort of speech or expression of opinion, the answer is to have more speech and opinion,” she said.
“I’m not trying to muzzle the guy,” O’Reilly replied.
True. You don’t need a muzzle if your head has been cut off.
O’Reilly has every right to quarrel with my opinion or question my accuracy. But why resort to intimidation and violent imagery? I don’t believe O’Reilly really wants to sever my head, but if only one of his millions of viewers interprets his message otherwise, that’s still a problem for me. Already, Beck fans have been accused of killing a police officer, of threatening to kill a senator and of engaging in a highway shootout en route to an alleged attack on liberal groups.
Let’s drop the thuggish tactics – before more people get hurt.
Dana Milbank, Washington Post
As a boy in 1940s Greece, my friend Costas, now a retired banker, had a pistol shoved in his face by a communist guerrilla screaming that he wanted to requisition the family mule. Knowing that the animal meant his family’s survival in desperate times, Costas refused. He might have been shot then and there if the guerrilla had not been restrained by more compassionate comrades. Many years later, attending his nephew’s wedding in Athens, Costas was stunned to recognize the best man. It was the very fellow who had nearly killed him over a mule.
Such stories are common in Greece, where a merciless occupation by Germans and Italians during World War II, violence between left and right, and foreign meddling during the civil war (roughly 1945-49) and the Junta years (1967-74) left Greeks living cheek by jowl with people they could never forgive.
Kevin Andrews experienced the dangers of the countryside during the civil war. “The Flight of Ikaros,” the book he produced from his travels, remains not only one of the greatest we have about postwar Greece—memorializing a village culture that has almost vanished—but also one of the most moving accounts I have ever read of people caught up in political turmoil. (It is richer than George Orwell’s “Homage to Catalonia” because Andrews spent more time getting to know the people he wrote about.) “Flight” was first published in 1959 and last reprinted by Penguin in 1984. For too many years, this rare account has languished out of print.
Kevin Andrews posing in the ruins of Mistras during his travels through the Peloponnese in the early 1950s.
Born half English, half American in China in 1924, Andrews saw combat with the 10th Mountain Division in Italy, graduated from Harvard in 1947, and set out to study archaeology in Greece. A fellowship allowed him to spend years in country, working on a study of the ruins of the medieval fortresses of the Peloponnese. The research was largely conducted on foot in perilous times, when the mountains hid bands of guerrillas and the rugged villages were full of soldiers and suspicious police.
Though “Flight” occasionally sketches the larger political picture—the sources of the civil war and effects of the Marshall Plan—Andrews’s interests are consistently in the ordinary people he encounters. His politics were clearly of the left, yet many of the villagers he befriended were rightists, royalists or worse. Kostandí, a hardened killer living near the ruins of the Byzantine city of Mistras, is exuberantly generous with his foreign friend. His wife grows to trust Andrews enough to tell him the gripping story of a recent battle for the hilltop castle, where the guerrillas charged outnumbered soldiers. In her telling, it merges with a crazy feud between Kostandí and his brother: “I said, ‘Eh, Kotso, did you kill him?’ because I saw his clothes, his hands, his whole body covered with blood, but he only laughed. ‘Him? No, he got away through the upper gate. He’ll be halfway across Taygetos by now. Why do you look at me like that? This morning I killed sixteen men near Pend’ Alónia. Now give me my baby and take my clothes and wash them.’”
The villagers were intrigued by the foreigner dressed in rags who was as happy sleeping under the stars as in their homes, and he captured their speech and manners. He had a perfect ear for the conversation—the rumors, the paranoia, the generosity—of rural Greeks. Roger Jinkinson’s biography of Andrews, “American Ikaros” (2010), suggests that he was a difficult man, lacking empathy for others. You would never guess it from the affectionate portraits in “Flight.” The book is full of intimate dramas. On a train to Athens he meets an old man and his youngest son, Greeks who had been forced to leave Romania after the war and were essentially interned by the Greek government. “‘You who come from America,’ said the boy, ‘tell me, is it possible to live there like a human being? Is there a place in the world where one can live like a human being?’—he repeated the phrase bitterly.” After the father explains their woes—“‘Sorrow lasts as long as life. Life is long; I can never remember a time when I was not alive,’ he murmured absently”—Andrews shares some “dusty, half-squashed grapes” with them so they can satisfy the demands of hospitality.
Later, Andrews becomes koumbáros (godfather) to the child of a royalist shepherd, Andoni, a man who finds a visit to Athens utterly baffling:
He looked across the shabby, humble, sprawling little town to the blurred outlines of the mountains he knew better, and then back up at the columns of the Parthenon, and said, “Who made these things, Koumbáre?”
“People who lived here thousands of years ago.”
And he said, “Things like this are from God.”
Distracted by his study of castles and a climb up Mount Olympus, Andrews took a long time getting back to his godson’s family in the book. When he finally did, it was to acknowledge that he would soon return to America and did not know whether he would see them again. “At last Andoni and I sat alone over the end of our meal. One of the girls came in and put on the table a bag full of biscuits she and her mother had baked that morning, a bottle of some kind of red syrup and a jar of sweets. ‘For you, Godfather,’ she murmured softly, looking at me; then she lowered her eyes and went out of the room. I sat gazing at the objects on the table and suddenly turned my face to the wall. Andoni leaped up and clasped my head in his arm.”
A life in America did not work out. Andrews eventually married a daughter of the poet E.E. Cummings and returned to Greece. The couple separated during the Junta years, she taking their children abroad while he stubbornly stayed on—in 1975 he renounced his American citizenship in fury over our support of the absurd and incompetent government. Among his books, most of which remain nearly impossible to find, are two studies of Athens, two longer poems and a volume called “Greece in the Dark: 1967-1974,” perhaps the best account in English of resistance to the colonels’ regime. It stirringly re-creates the major protest marches as well as the funeral of Greece’s first Nobel Prize-winning poet, George Seferis. One day in 1989, Andrews set out to swim the rough waters off Kythera. He was heading for Avgó (Egg), a little islet said to be Aphrodite’s birthplace. His body was recovered the next day.
“Does anything impoverish like caution?” Andrews asked, and reading his books most of us will feel a twinge of regret about our more conventional paths, likely combined with relief at having avoided many of his mistakes. Few would call Andrews’s life a success. He was too much a loner, too contrary, and though he wrote much—including a long-labored-over, probably unfinished novel—he published little and obscurely. But he left behind at least one indisputably great book. “The Flight of Ikaros” is evocative and painful; restrained and full of compassionate feeling. Here are Greeks in all their flinty reality, their contradiction, their resistance.
Mr. Mason teaches at Colorado College. His latest book is a memoir, “News From the Village: Aegean Friends.”
A great deal of what public figures have said about the proposed Islamic cultural center near ground zero in Lower Manhattan has been aimed at playing off fear and intolerance for political gain. Former Justice John Paul Stevens of the Supreme Court, on the other hand, delivered one of the sanest and most instructive arguments for tolerance that we have heard in a long time.
Justice Stevens, who retired at the end of the court’s last term, served for two and a half years as an intelligence officer at Pearl Harbor during World War II. In a speech on Thursday in Washington, he confessed to his initial negative reaction when, decades later, he saw dozens of Japanese tourists visiting the U.S.S. Arizona memorial.
“Those people don’t really belong here,” he recalled thinking about the Japanese tourists. “We won the war. They lost it. We shouldn’t allow them to celebrate their attack on Pearl Harbor even if it was one of their greatest victories.”
But then Justice Stevens said that he recognized his mistake in “drawing inferences” about the group of tourists that might not apply to any of them. “The Japanese tourists were not responsible for what some of their countrymen did decades ago,” he said, just as “the Muslims planning to build the mosque are not responsible for what an entirely different group of Muslims did on 9/11.”
Many Muslims who pray in New York City mosques, he added, “may well have come to America to escape the intolerance of radicals like those who dominate the Taliban.” Descendants of pilgrims “who came to America in the 17th century to escape religious persecutions” and helped establish our democracy should get that, he said.
Justice Stevens ended with a powerful message that participants in the debate over the mosque and community center in Lower Manhattan should heed: “Ignorance — that is to say, fear of the unknown — is the source of most invidious prejudice.”
Taking on statism’s pride and joy, the BBC.
On the night of June 21, 1966, Oliver Smedley, who operated a pirate radio station off the coast of England, shot a rival named Reg Calvert during a heated confrontation at Smedley’s home outside London. Calvert died instantly, but there were other victims—pirate radio itself and, it seemed, Smedley’s dream of using that colorful, ephemeral medium to help roll back the British welfare state.
The phrase pirate radio conjures an image of wild times on the high seas as free-spirited DJs in the 1960s stick it to The Man by giving the kids their rock ’n’ roll. But Adrian Johns’s “Death of a Pirate” is more concerned with Friedrich von Hayek and “The Road to Serfdom” than with Mick Jagger and the Rolling Stones. Mr. Johns, a University of Chicago history professor who specializes in intellectual property, portrays the British radio pirates not in the warm glow of sentimental memory that the period usually enjoys but in the historian’s cold bright light. “Death of a Pirate” is, in its way, a treasure.
At the center of the tale stands Oliver Smedley, a conservative political activist and entrepreneur determined to stop what he saw as Britain’s slide toward socialism. After dabbling in politics and journalism in the 1950s, he launched a network of think tanks and political organizations that pressed his call to cut taxes, slash public spending, eliminate tariffs and reduce government’s role in economic life. When in 1964 two like-minded acquaintances pitched him on the idea of launching a pirate-radio ship, Smedley seized on the project as a chance to trade talk for action by taking on statism’s pride and joy, the BBC.
The BBC is a nonprofit “state corporation” funded primarily by an annual license fee (currently about $200) charged to every television owner. At its founding in 1922, the BBC was designated as the sole provider of radio programming in the United Kingdom. Unofficially, the Beeb was expected to reinforce a traditional view of British culture and life. The programming was a highbrow blend of mostly classical music and lectures. Commercials were forbidden for their alleged coarsening effect. Critics of laissez-faire capitalism, including John Maynard Keynes, cited the BBC’s “success” in delivering a vital service to the masses as proof that public corporations were the answer to the free market’s problems.
Oliver Smedley was eager to demonstrate otherwise. His Radio Atlanta would show the benefits of giving people what they desired instead of what central planners thought they should get. The station would sell commercials not only to make a profit but also to deliver knowledge that is essential to the efficient operation of a market economy. Smedley raised capital, created the convoluted corporate structure necessary to skirt British law, set up an advertising sales operation, bought a ship, fitted it with the necessary broadcast gear and sent it to sea—where it immediately began leaking money.
Radio Atlanta wasn’t alone in that predicament. Advertisers were reluctant to spend money with pirate stations—there were about 10—that might be made to disappear the following week by forces of nature or government. Radio Atlanta was also hampered by its programming. Contrary to myth, not all British pirates were full-time rockers, even if plenty of British kids were dying to hear rock music on that must-have new gadget, the transistor radio. The legendary Radio Caroline, for example, featured a music mix that ranged from the Beatles and Searchers to the Mantovani Orchestra and West End show tunes. Radio Atlanta’s offerings were so staid, says Mr. Johns, that “at times they could even sound distinctly similar to BBC fare.”
What’s more, many of the pirate-radio operators were dreadful businessmen. Smedley and other owners seriously underestimated the cost of building and operating a pirate station. In July 1964, just weeks after launching Radio Atlanta, Smedley entered an uneasy partnership with the rival Radio Caroline. The following year, he sold his station’s meager assets to Caroline in a bid to pay off his creditors. Undeterred, Smedley then formed an informal “alliance” with another pirate operation, Radio City, which broadcast from Shivering Sands, an abandoned antiaircraft gun emplacement in the Thames estuary.
Radio City owner Reg Calvert was a streetwise dance-hall impresario who used the airwaves to promote his stable of aspiring rock and pop stars, including Screaming Lord Sutch, who became Radio City’s star DJ. Calvert unwisely regarded the tapped-out Smedley as a potential source of capital; Smedley coveted the Shivering Sands facility. But Calvert, frustrated with Smedley’s failure to deliver promised equipment and payments, soon began talks with yet another pirate, American-owned Radio London. Matters came to a head in June 1966 when a gang of strike-idled dockworkers hired by Smedley seized Shivering Sands and expelled the Radio City staff. The move prompted the fatal confrontation at Smedley’s house.
Smedley pleaded self-defense and was acquitted. But Reg Calvert’s death and the resulting headlines forced the British government to address what appeared to be an out-of-control situation. Unfortunately for the authorities, radio piracy wasn’t illegal. Parliament rectified that situation by passing a marine “broadcast offences” act outlawing offshore radio stations and, more important, forbidding British companies to advertise on them. By late 1967, the pirate armada had largely been swept from the seas.
While Radio Caroline is the best-remembered of the renegade stations—the 2009 film “Pirate Radio” is loosely based on its story—the nearly forgotten Oliver Smedley, who died in 1989, was arguably the most successful buccaneer of the bunch. After all, as Mr. Johns notes, the pirate-radio episode sparked just the sort of transformation in British broadcasting that Smedley had envisioned. The government soon licensed commercial radio stations, the BBC accepted pop music and even adopted a more skeptical stance toward officialdom. Smedley succeeded beyond any reasonable expectation in spotlighting the flaws of state media and, by extension, state-controlled business.
Mr. Bloomquist is president of the consulting firm Talk Frontier Media.
No meat, no wool, no coffee or candles to read by, but plenty of high aspirations—and trouble.
In 1843, in the quiet middle of Massachusetts, a group of high-minded people set out to create a new Eden they called Fruitlands. The embryonic community miscarried, lasting only seven months, from June to January. Fruitlands now has a new chronicler in Richard Francis, a historian of 19th-century America. “This is the story,” he writes, “of one of history’s most unsuccessful utopias ever—but also one of the most dramatic and significant.” As we learn in his thorough and occasionally hilarious account, the claim is about half right.
The utopian community of Fruitlands had two progenitors: the American idealist Bronson Alcott and the English socialist Charles Lane. Alcott was a farm boy from Connecticut who had turned from the plough to philosophy. According to Ralph Waldo Emerson, his friend, Alcott could not chat about anything “less than A New Solar System & the prospective Education in the nebulae.” Airy as his thoughts were, Alcott could be a mesmerizing speaker. Indeed, his words partly inspired an experimental community in England, where he met Lane.
Lane has often been considered the junior partner in the Fruitlands story, merely the guy who put up the money (for roughly 100 acres, only 11 of which were arable). But Mr. Francis fleshes him out, showing him to be a tidier and more bitter thinker than Alcott, with a practical streak that could be overrun by his hopes for humanity.
As Mr. Francis notes, Alcott and Lane shared a “tendency to take moderation to excess,” pushing their first principles as far as they could go. One such principle was that you should do no harm to living things, including plants. As Mr. Francis explains: “If you cut a cabbage or lift a potato you kill the plant itself, just as you kill an animal in order to eat its meat. But pluck an apple, and you leave the tree intact and healthy.”
The Fruitlands community never numbered more than 14 souls, five of them children. The members included a nudist, a former inmate of an insane asylum, and a man who had once gotten into a knife fight to defend his right to wear a beard. Then there was the fellow who thought swearing elevated the spirit. He would greet the Alcott girls: “Good morning, damn you.” Lane thought the members should be celibate; Alcott’s wife, Abigail, the mother of his four daughters and the sole permanent woman resident, was a living reproach to this view.
All of the Fruitlands members, however, agreed to certain restrictions: No meat or fish; in fact nothing that came from animals, so no eggs and no milk. No leather or wool, and no whale oil for lamps or candles made from tallow (rendered animal fat). No stimulants such as coffee or tea, and no alcohol. Because the Fruitlanders were Abolitionists, cane sugar and cotton were forbidden (slave labor produced both). The members of the community wore linen clothes and canvas shoes. The library was stocked with a thousand books, but no one could read them after dark.
And how did the whole experiment go? Well, most of the men at Fruitlands had little farming experience. Alcott, who did, impressed Lane with his ability to plow a straight furrow; but Alcott was always a better talker than worker. The community rejected animal labor—and even manure, a serious disadvantage if you want to produce enough food to be self-sufficient. The farming side of Fruitlands was a dud.
But the experiment was indeed, as Mr. Francis claims, “dramatic.” The drama came from a common revolutionary trajectory in which “a group of idealists ends by trying to destroy each other.” “Of spiritual ties she knows nothing,” Lane wrote of Abigail. “All Mr. Lane’s efforts have been to disunite us,” she confided to a friend, referring to her relations with Bronson. Even the usually serene Bronson agonized: “Can a man act continually for the universal end,” he asked Lane, “while he cohabits with a wife?” By Christmas, which he spent in Boston, Bronson seemed on the verge of dissolving his family. In the new year he returned to Fruitlands, but he had a breakdown. This was no way to run a utopia, and the experiment ended.
Was Fruitlands “significant”? In Mr. Francis’s reading, the community “intuited the interconnectedness of all living things.” That intuition, he believes, underlies our notions of the evils of pollution and the imminence of environmental catastrophe, as well as our concerns about industrialized farming. The Fruitlanders’ understanding of the world, he argues, helped create a parallel universe—an alternative to scientific empiricism—that is still humming along in the current day.
Perhaps so. Certainly many New Age and holistic notions, in their fuzzy and well-meaning romanticism, share a common ancestor with the Fruitlands outlook. But the result is not always benign. It was the Fruitlanders’ belief, for instance, that “all disease originates in the soul.” One descendant of this idea is the current loathsome view that cancer is caused by bad thoughts.
Though obviously sympathetic to the Fruitlands experiment, Mr. Francis gives us enough facts to let us draw our own conclusions. He records Bronson and Abigail’s acts of charity, already familiar to us from their daughter Louisa’s novel “Little Women” (1868). But he also retells less admiring stories, of their petty vindictiveness and casual callousness. Along the way he adumbrates the ways in which idealism can slide into megalomania.
Mr. Francis reports a conversation that Alcott once had with Henry James Sr., the father of the novelist Henry and the philosopher William. Alcott let it drop that he, like Jesus and Pythagoras before him, had never sinned. James asked whether Alcott had ever said, “I am the Resurrection and the Life.” “Yes, often,” Alcott replied. Unfortunately, Mr. Francis fails to record James’s rejoinder: “And has anyone ever believed you?”
Ms. Mullen writes for the Barnes & Noble Review.
SILVIO BERLUSCONI’S opponents have tried everything to get rid of him. They have manoeuvred against him, decried his policies, condemned his methods and, he has long claimed, incited left-wing prosecutors to try and jail him.
But since October 26th, a new possibility has emerged: that the 74-year-old Mr Berlusconi might just be laughed off the Italian political stage. Some of the details of the latest scandal to engulf him are such that even his most faithful supporters must realise he makes Italy an object of derision.
The girl at the centre of the affair—a 17-year-old Moroccan runaway—calls herself Ruby Rubacuori, or “Ruby Heartstealer”. On her Facebook page, her activities include belly dancing, and before she became involved with Italy’s prime minister she appears to have worked in Milan nightclubs.
The precise nature of their involvement is unclear. “Ruby”—whose real name appears to be Karima El Mahroug—said in an interview published on Saturday that she visited Mr Berlusconi’s home outside Milan only once, on Valentine’s Day this year, and that after giving him an account of her misfortunes, he gave her €7,000 ($9,770) and some jewellery. But, according to leaked details from an inquiry in Milan, she had earlier told police and prosecutors that she had been there three times, and that one of the parties ended in an erotic game called “Bunga, Bunga”.
Unsurprisingly, this has led to any number of jokes and even a song performed on Italian network television to the tune of Shakira’s Waka Waka World Cup anthem.
However amusing to others, the affair is potentially serious for Mr Berlusconi. Three close associates of the prime minister are reportedly under investigation on suspicion of aiding and abetting prostitution on the basis of Ms El Mahroug’s depositions. She denies having had sex with the prime minister, but the investigators are looking into whether others did, and were rewarded for doing so.
That would not incriminate Mr Berlusconi. But it might be enough to bring charges against his associates, who are suspected of procuring the women. One, a former showgirl called Nicole Minetti who is now a regional parliamentarian for Mr Berlusconi’s party, collected Ms El Mahroug after she was released from a police station in May.
The young Moroccan had been detained on suspicion of stealing €3,000, but was let go. The station commander said in an interview on October 29th that one of his officers had earlier received a call from the prime minister’s office informing them, erroneously, that Ms El Mahroug was the grand-daughter of Egypt’s president, Hosni Mubarak.
As opposition politicians swiftly noted, that could mean Mr Berlusconi had abused his position and thus committed an offence under Italian law. Far from denying it, the prime minister appears bent on defiance.
On October 29th, he admitted he had sent Ms Minetti “to provide help to someone who could have been consigned not to a home or the jails… but fostered”. Mr Berlusconi added that he had no intention of changing his lifestyle or explaining what went on at his home.
That sort of brazenness got him through the last bout of sex scandals in 2009. But there are several reasons for questioning whether it will work this time.
Mr Berlusconi is much weaker now. His poll ratings have fallen as Italians have become increasingly sceptical about his blithe assurances on the state of the economy. That has made them less tolerant of evidence of corruption in his government. And since July, when his former ally, Gianfranco Fini, formed a separate parliamentary group, the prime minister has been without an assured majority in the lower chamber.
Last year, most of the prime minister’s supporters went along with him as he ignored calls for a parliamentary statement, shrugged off claims he had laid himself open to blackmail and jauntily admitted he was no angel. But it was expected that he would save them from future embarrassment by being, if not more virtuous, then at least more discreet.
Mr Berlusconi has confounded that expectation, calling into question not just his private life but his judgement.
Whether we like it or not, human life is subject to the universal laws of physics.
My day, for example, starts with a demonstration of Newton’s First Law of Motion.
It states, “Every body continues in its state of rest, or of uniform motion in a straight line…”
“…unless it is compelled to change that state by forces impressed upon it.”
Based on supercomplicated physical observations, Einstein concluded that two objects may perceive time differently.
Based on simple life experience, I have concluded that this is true.
Newton’s Cradle shows how energy travels through a series of objects.
In our particular arrangement, kinetic energy is ultimately converted into a compression of the forehead.
The forehead can be uncrumpled by a downward movement of the jaw.
Excessive mechanical strain will compromise the elasticity of most materials, though.
The human body functions like a combustion engine. To produce energy, we need two things:
- Oxygen, supplied through the nostrils (once the toy car is removed, that is).
- Carbohydrates, which come in various forms (vanilla, chocolate, dulce de leche).
By the by: I had an idea for a carb-neutral ice cream.
All you need is to freeze a pint of ice cream to -3706 F.
The energy it will take your system to bring the ice cream up to a digestible temperature is roughly 1,000 calories, neatly burning away all those carbohydrates from the fat and sugar.
The only snag is the Third Law of Thermodynamics, which says it’s impossible to go below -459 F.
But back to Newton: he discovered that any two objects in the universe attract each other, and that this force is proportional to their mass.
The Earth is heavier than the Moon, and therefore attracts our bodies with a much greater force.
This explains why an empty refrigerator exerts a much smaller gravitational pull than, say, one that’s stacked with 50 pounds of delicious leftovers. Great: that means we can blame the leftovers.
(Fig. A): Let’s examine the behavior of particles in a closed container.
(Fig. B): The more particles we squeeze into the container, the testier they will become, especially if the container happens to be a rush-hour downtown local at 86th and Lex.
(Fig. C): Usually the particles will distribute evenly, unless there is a weird-looking puddle on the floor.
The probability of finding a seat on the subway is inversely proportional to the number of people on the platform.
Even worse, the utter absence of people is 100 percent proportional to just having missed the train.
To describe different phenomena, physicists use various units.
PASCALS, for example, measure the pressure applied to a certain area.
COULOMBS measure electric charge (which can occur if said area is a synthetic carpet).
DECIBELS measure the intensity of the trouble the physicist gets into because he didn’t take off his shoes first.
Often those units are named after people to recognize historic contributions to their field of expertise. One NEWTON, for example, describes the force that is necessary to accelerate 1 kilogram of mass by one meter per second squared.
This is not to be confused with one NIEMANN, which describes the force necessary to make a three-year-old put on his shoes and jacket when we’re already late for kindergarten.
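For readers who like their forces in symbols, the standard definition of the newton quoted above can be written out as a quick equation (this is just Newton’s second law restated, nothing specific to the column):

```latex
% Newton's second law, and the SI definition of the newton
F = m\,a, \qquad 1\,\mathrm{N} = 1\,\mathrm{kg} \cdot 1\,\mathrm{m/s^2}
```

The niemann, regrettably, has no agreed SI value.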
Once the child is ready to go, I search for my keys. I start spinning around to scan my surroundings. This rotation exposes my head and all its contents to centrifugal forces, resulting in loss of hair and elongated eyeballs. That’s why I need to wear prescription glasses, which are yet another thing I constantly misplace.
Obviously, the hair loss theory I just presented is bogus. Hair can’t be “lost.” Since Antoine Lavoisier, we all know that “matter can be neither created nor destroyed, though it can be rearranged,” which, sadly, it eventually will.
Not everything can be explained through physics, though. I’ve spent years searching for a rational explanation for the weight of my wife’s luggage. There is none. It is just a cruel joke of nature.
Christoph Niemann, New York Times