Full article and photo: http://www.nytimes.com/interactive/2011/02/14/opinion/14oped-valentine.html
Some of my favorite books are the ones I’ve never opened
Like most readers, I love browsing in bookshops and libraries. I enjoy running my fingers along the spines of books and reading titles and authors’ names, pulling the books out and flipping through them, thinking about the stories inside them.
There are hundreds of films I’ve never seen, thousands of songs I’ve never heard. But I don’t anticipate them the way that I do books. I don’t imagine the things I would learn from them, how my life would be subtly but surely different after I had experienced them. With books, the anticipation is different. In fact, with books, it is sometimes the best part.
Last week I bought a book. I looked at the blurb and read the first paragraph, and I could feel the texture of the book in my mind. It was going to be a steadily paced yet exciting coming-of-age story about three young girls who go camping in the woods, stumble across a couple vacationing in a cabin, and see things through the windows that upend their world. It would move from the girls in their clumsy tent, to their fable-like journey through the forest, to the glowing windows of the cabin. The story was going to be overflowing with the smell of mulching leaves, the stale sweetness of fizzy drinks on the tongue, the crackle of empty sweet wrappers. It was going to be honest and real and uncomfortably sensual.
Except that it wasn’t about that at all: It was a thriller about a woman having an affair. With every sentence I read, the book I had imagined shrank smaller and smaller. By the end of the third page, it had disappeared. The actual book was by no means bad, it just wasn’t the book I thought it would be. That dense, bittersweet story I had anticipated reading did not exist, and I felt a sense of loss, a yearning for something unreal. And yet somehow I had read that nonexistent book, because I had created it myself. I was not changed by the experience of reading that book, but perhaps I was changed by my own anticipation of what it could have been.
So I save books. I buy a book with every intention of reading it, but then the more I look at it and think about how great it is going to be, the less I want to read it. I know that it can’t possibly live up to my expectations, and slowly, the joy of my own imaginings becomes more precious to me than whatever actually lies between the covers.
Most books I read just get chewed up and spat out. I enjoy them, but ask me in a year and all I’ll remember is a vague shape of plot, the sense of a character, perhaps the color of a sky in summer, or the taste of borscht. My favorite books, of course, do stay with me. They shift and color my world, and I am different for having read them. But before reading a book, there’s no way to know which it will be: a slick of lipstick that I wear for a day and then wipe off, or a tattoo that stays on my body forever. An unread book has the potential to be the greatest book I have ever read. Any unread book could change my life.
I currently have about 800 unread books on my shelves. Some would find this excessive, and they would probably be right. But to me, my imagined library is as personal and meaningful as the collection of books I have read, or perhaps even more so. Each book is intense and vivid in my mind; each book says complex things about my life, history, and personality. Each book has taught me something about the world, or at least about my own expectations of the world, my idea of its possibilities. Here are some examples:
I think that Mervyn Peake’s “Gormenghast Trilogy” is at once claustrophobic and expansive. It has the texture of solid green leaves crunched between your molars. It tastes of sweetened tea and stale bread and dust. When I read it, I will feel close to my father because it is his favorite book. Reading the Gormenghast books will allow me to understand my father in ways I currently do not, and at certain points in the book I will put it down and stare into the middle distance and say aloud, “Oh. Now my childhood makes sense.”
Radclyffe Hall’s “The Well of Loneliness” will make me sad and proud and indignant. I will no longer get tangled up in discussions about gender issues, because I will finally have clear-cut and undeniable examples of how gender stereotyping is bad for everyone. Reading it will make me feel like an integral part of queer history and culture, and afterwards I will feel mysteriously connected to all my fellow LGBT people. Perhaps I will even have gaydar.
Roberto Bolaño’s “2666” is an obsessive and world-shifting epic. When I read it, I will be completely absorbed by it. It will be all I think about. It will affect my daily life in ways I can’t fully understand, and when I finish it, I will have come to profound revelations about the nature of existence. I will finally understand all the literary theory I wrote essays on when I was at college.
Manuel Puig’s “Kiss of the Spider Woman” has been on my shelves for 10 years, dragged from house to house without its spine ever being cracked. It was given to me by a friend when I was a teenager, and I cannot read it because when I do, I will finally understand my friend, and that scares me. Her life is a set of nesting dolls with something solid and simple at the center, and I do not know whether that thing is pure gold or sticky-dark tar-poison. Holding the book is holding my friend’s hand, and that is as close as I dare to get.
Jeff VanderMeer’s “City of Saints and Madmen” is an entire universe between two covers. It contains sculptures and mix tapes and skyscrapers and midwives and sacrifices, and everything else that exists in my own world, but with every edge crusted in gilt and mold. It will open my eyes to a new way of seeing, and when I finish it, I will somehow have been transformed from being just a person who writes into A Real Writer.
Anaïs Nin’s “Journals” will shatter my illusions and create new ones. Anaïs Nin is everything that I fear and hope that I am; when I read her “Journals,” she might be everything I think she is. This is thrilling and terrifying at the same time, because then I will be forced to emulate her life of complex heartaches, pearls and lace, all-day sleep, and absinthe-soaked dinner parties — and those things are just not practical. And, even more frightening, she may not be who I think she is. If she is not special, then no one can be special.
I am not ready for Françoise Sagan’s “Aimez-Vous Brahms.” At 18 I read her novella “Bonjour Tristesse” and I was transformed: This book held a truth I didn’t even know I knew. The protagonist of “Aimez-Vous Brahms” is 39, and so when I read it at 39 it will tell me the truth the same way that “Bonjour Tristesse” did when I was 18. Like 18, 39 is the perfect meeting of anticipation and experience, and this book will guide me through into the next phase of my life.
I have not read these books because I worry that they’re not the books I think they are. I’m sure they are wonderful books, but no book could possibly contain all the knowledge and understanding I am expecting from these. Perhaps I will never read them. This is the same logic that means I will probably never visit Russia: I imagine that a trip to Russia will be the crux of my life. Every moment will be candy-colored buildings and strong coffee on silver platters, steam trains slipping past quaint farmhouses and huge black bears glimpsed through the snow, furred hats against my ears, and history seeping into my veins. I know that if I actually go to Russia, there will be moments where I don’t like the food, or my feet ache, or I can’t sleep, or I get annoyed at not being able to read the Cyrillic signs. If I keep it in my imagination, it stays pure and perfect.
There is another reason to leave books unread: because I know I really will love them. This might seem nonsensical, and I suppose it is. I am a writer, and I know that certain books will resonate deeply and perfectly because they are similar in some way to my own writing, though vastly better. This is why I have not read Alice Greenway’s “White Ghost Girls,” a short and lyrical novel about sisters in 1960s Hong Kong; or Francesca Lia Block’s fantastical erotica novellas, “Ecstasia” and “Primavera”; or Stewart O’Nan’s small-town ghost story, “The Night Country”; or anything ever written by Martin Millar.
I know that I will love them and want to learn from them, and so I don’t read them: firstly because it is tiring to read that way, with your eyes and ears and brain constantly absorbing; and secondly because once I read them they will be over, the mystery will be revealed. These books have affected my writing, and I haven’t even read them. Maybe we can learn as much from our expectations of a story as we can from the actual words on the page.
Try an experiment with me. It might seem odd at first, but go to your bookshelves and pick a book you have not read. Hold it in your hands. Look at the cover and read the description on the back. Think about what the story might be about, what themes might be in it, what it might say about the world you inhabit, whether it can make you imagine an entirely different world.
There is absolutely nothing wrong with that book. It might prove to be a great book, the best book you have ever read. But I suggest that the literary universe you have just created might be more exciting and enlightening than the one contained within those covers. Your imagination contains every possible story, every possible understanding, and any book can only be one tiny portion of that potential world.
Kirsty Logan is a fiction writer in Glasgow.
Electrified sand. Exploding balloons. The long and colorful history of weather manipulation.
This brutal winter has made sure that no one forgets who’s in charge. The snow doesn’t fall so much as fly. Cars stay buried, and feet stay wet. Ice is invisible, and every puddle is deeper than it looks. On the eve of each new storm, the citizenry engages in diligent preparations, rearranging travel plans, lining up baby sitters in case the schools are closed, and packing comfortable shoes for work so they’re not forced to spend all day wearing their awful snow boots.
One can’t help but feel a little embarrassed on behalf of the species, to have been involved in all this fuss over something as trivial as the weather. Is the human race not mighty? How are we still allowing ourselves, in the year 2011, to be reduced to such indignities by a bunch of soggy clouds?
It is not for lack of trying. It’s just that over the last 200 years, the clouds have proven an improbably resilient adversary, and the weather in general has resisted numerous well-funded — and often quite imaginative — attempts at manipulation by meteorologists, physicists, and assorted hobbyists. Some have tried to make it rain, while others have tried to make it stop. Balloons full of explosives have been sent into the sky, and large quantities of electrically charged sand have been dropped from airplanes. One enduring scheme is to disrupt and weaken hurricanes by spreading oil on the surface of the ocean. Another is to drive away rain by shooting clouds with silver iodide or dry ice, a practice that was famously implemented at the 2008 Olympics in Beijing and is frequently employed by farmers throughout the United States.
There’s something deeply and perennially appealing about the idea of controlling the weather, about deciding where rain should fall and when the sun should shine. But failing at it has been just as persistent a thread in the human experience. In a new book called “Fixing the Sky: The Checkered History of Weather and Climate Control,” Colby College historian of science James Rodger Fleming catalogs all the dreamers, fools, and pseudo-scientists who have devoted their lives to weather modification, tracing the delusions they shared and their remarkable range of motivations. Some wanted to create technology that would be of use to farmers, so that they would no longer have to operate at the mercy of unpredictable droughts. Others imagined scenarios in which the weather could be weaponized and used against foreign enemies. Still others had visions of utopia in which the world’s deserts were made fertile and every child was fed.
“Even some of the charlatans had meteorology books on their desks,” Fleming said last week. “Most had simple ideas: for instance, that hot air rises. These guys’ll have some sense of things, but they won’t have a complete theory of the weather system. They have a principle they fix on and then they try to build their scheme from there.”
What they underestimated, in Fleming’s view — what continues to stymie us all, whether we’re seeding clouds or just trying to plan for the next commute — is weather’s unfathomable complexity. And yet, the dream has stayed alive. Lately, the drive to fix the climate has taken the form of large-scale geoengineering projects designed to reverse the effects of global warming. Such projects — launching mirrors into space to reflect solar radiation away from the earth, for instance — are vastly more ambitious than anything a 19th-century rainmaker could have cooked up, and would employ much more sophisticated technology. What’s unclear, as one looks back at the history of weather modification research, is whether all that technology makes it any more likely that our ambitions will be realized, or if it just stands to make our failure that much more devastating.
The story of modern weather control in America, as Fleming tells it in “Fixing the Sky,” begins on a Wednesday morning in November of 1946, some 14,000 feet in the air above western Massachusetts. Sitting up there in a single-engine airplane was Vincent Schaefer, a 41-year-old scientist in the employ of the General Electric Research Laboratory, whose principal claim to fame up to that point was that he’d found a way to make plastic replicas of snowflakes. Prior to the flight, Schaefer had conducted an experiment that seemed to point toward a method for manipulating clouds using small bits of dry ice. If the technology could be exported from the lab and made to work in the real world, the potential applications would be limitless. With that in mind, flying over the Berkshires, Schaefer waited until his plane entered a suitable-looking cloud, then opened a window and let a total of six pounds of crushed dry ice out into the atmosphere. Before he knew it, “glinting crystals” of snow were falling obediently to the ground below.
GE announced the results of the demonstration the next day. “SNOWSTORM MANUFACTURED,” read the massive banner headline on the front page of The Boston Globe. The GE lab was deluged with letters and telegrams from people excited about the new technology for all sorts of reasons. One asked if it might be possible to get some artificial snow for use in an upcoming Christmas pageant. Another implored the company to aid a search-and-rescue effort at Mount Rainier by getting rid of some inconveniently located clouds. Hollywood producers inquired about doing up some blizzards for their movie sets. Separately, a state official from Kansas wrote to President Truman in hopes that GE’s snow-making technology could be used to help end a drought there. It seemed to be an all-purpose miracle, as though there was not a problem on earth it couldn’t fix.
Insofar as technological advancement in general is all about man’s triumph over his conditions, a victory over the weather is basically like beating the boss in a video game. And GE’s breakthrough came at a moment when the country was collectively keyed up on the transformative power of technology: World War II had just ended, and we suddenly had the bomb, radar, penicillin, and computers. But the results of Schaefer’s experiment would have inspired the same frenzied reaction in any era. “Think of it!” as one journalist wrote in a 1923 issue of Popular Science Monthly, about an earlier weather modification scheme, “Rain when you want it. Sunshine when you want it. Los Angeles weather in Pittsburgh and April showers for the arid deserts of the West. Man in control of the heavens — to turn on or shut them off as he wishes.”
It’s a longstanding, international fantasy — one that goes all the way back to ancient Greece, where watchmen stood guard over the skies and alerted their countrymen at the first sign of hail so that they might try to hold off the storm by quickly sacrificing some animals. The American tradition begins in the early 19th century, when the nation’s first official meteorologist, James “the Storm King” Espy, developed a theory of rainmaking that involved cutting down large swaths of forest and lighting them on fire. Espy had observed that volcanic eruptions were often followed by rainfall. He thought these fires would work the same way, causing hot air to rise into the atmosphere, cool, and thus produce precipitation.
For years he unsuccessfully sought funding from the government so that he might test his theory, describing in an 1845 open letter “To the Friends of Science” a proposal wherein 40-acre fires would be set every seven days at regular intervals along a 600-mile stretch of the Rocky Mountains. The result, he promised, would be regular rainfall that would not only ease the lives of farmers and make the country more productive but also eradicate the spread of disease and make extreme temperatures a thing of the past. He did not convince the friends of science, however, and lived out his distinguished career without ever realizing his vision.
Others had better luck winning hearts and minds. In 1871, a civil engineer from Chicago named Edward Powers published a book called “War and the Weather, or, The Artificial Production of Rain,” in which he argued that rainstorms were caused by loud noises and could be induced using explosives. He found a sympathetic collaborator in a Texas rancher and former Confederate general by the name of Daniel Ruggles, who believed strongly that all one had to do to stimulate rain was send balloons full of dynamite and gunpowder up into the sky and detonate them. Another adherent of this point of view was Robert Dyrenforth, a patent lawyer who actually succeeded in securing a federal grant to conduct a series of spectacular, but finally inconclusive, pyrotechnic experiments during the summer and fall of 1891.
A few decades after Dyrenforth’s methods were roundly discredited, an inventor named L. Francis Warren emerged with a new kind of theory. Warren, an autodidact who claimed to be a Harvard professor, believed that the trick to rainmaking wasn’t heat or noise, but electrically charged sand, which if sprinkled from the sky could not only produce rain but also break up clouds. His endgame was a squad of airplanes equipped to stop droughts, clear fog, and put out fires. It was Warren’s scheme that inspired that breathless Popular Science article, but after multiple inconclusive tests — including some funded by the US military — it lost momentum and faded away.
For the next 50 years, charlatans and snake-oil salesmen inspired by Warren, Dyrenforth, and the rest of them went around the country hawking weather control technologies that had no basis whatsoever in science. It wasn’t until after World War II, with the emergence of GE’s apparent success dropping dry ice into clouds, that the American public once again had a credible weather control scheme to get excited about. Once that happened, though, it was off to the races, and by the 1950s, commercial cloud seeding — first with dry ice, then with silver iodide — was taking place over an estimated 10 percent of US land. By the end of the decade, it was conventional wisdom that achieving mastery over the weather would be a decisive factor in the outcome of the Cold War. In 1971, it was reported that the United States had secretly used cloud-seeding to induce rain over the Ho Chi Minh Trail in hopes of disrupting enemy troop movements. In 1986, the Soviet Union is said to have used it to protect Moscow from radioactivity emanating from the Chernobyl site by steering toxic clouds to Belarus and artificially bursting them. There’s no way to know whether the seeding operation actually accomplished anything, but people in Belarus to this day hold the Kremlin in contempt for its clandestine attempt to stick them with the fallout.
You’d think, given mankind’s record of unflappable ingenuity, we would have had weather figured out by now. But after decades of dedicated experimentation and untold millions of dollars invested, the world is still dealing with droughts, floods, and 18-foot urban snowbanks. What is making this so difficult? Why is it that the best we can do when we learn of an approaching snowstorm is brace ourselves and hope our street gets properly plowed?
The problem is that weather conditions in any given place at any given time are a function of far too many independent, interacting variables. Whether it’s raining or snowing is never determined by any one overpowering force in the atmosphere: It’s always a complicated and unpredictable combination of many. Until we have the capability to micromanage the whole system, we will not be calling any shots.
Fleming, for his part, doesn’t believe that a single one of the weather modification schemes he describes in his book ever truly worked. Even cloud-seeding, he says, as widespread as it is even today, has never been scientifically proven to be effective. Kristine Harper, an assistant professor at Florida State University who is writing a book about the US government’s forays into weather modification, says that doesn’t necessarily mean cloud-seeding is a total waste of time, just that there’s no way to scientifically measure its impact.
“You’d be hard-pressed to find evidence even at this point that there’s a statistically significant difference between what you would get from a seeded cloud and an unseeded cloud,” she said.
The good news for practitioners of weather control is that amid all this complexity, they can convince themselves and others that they deserve credit for weather patterns they have probably had no role whatsoever in conjuring. The bad news for anyone who’d like to prevent the next 2-foot snow dump — or the next 2 degrees of global warming — is that there’s just no way to know. As Fleming’s account of the last 200 years suggests, it may be possible to achieve a certain amount by intervention. But it’s a long way from anything you could call control. Those who insist on continuing to shake their fists at the sky should make sure they have some warm gloves.
Leon Neyfakh is the staff writer for Ideas.
Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2011/02/06/cloud_control/
The case that human athletes have reached their limits
Last summer, David Oliver tried to become one of the fastest men in the world. The American Olympic hurdler had run a time of 12.89 seconds in the 110 meters at a meet in Paris in July. The time was two-100ths of a second off the world record, 12.87, owned by Cuba’s Dayron Robles, a mark as impressive as it was absurd. Most elite hurdlers never break 13 seconds. Heck, Oliver seldom broke 13. He’d spent the majority of his career whittling down times from the 13.3 range. But the summer of 2010 was special. Oliver had become that strange athlete whose performance finally equaled his ambition and who, as a result, competed against many, sure, but was really only going against himself.
In Paris, for instance, Oliver didn’t talk about how he won the race, or even the men he ran against. He talked instead about how he came out of the blocks, how his hips dipped precariously low after clearing the sixth hurdle, how he planned to remain focused on the season ahead. For him, the time — Robles’s time — was what mattered. He had a blog and called it: “Mission: 12.85: The race for the record.” And on this blog, after Paris, Oliver wrote, “I am in a great groove right now and I can’t really pinpoint what set it off….Whatever groove I’m in, I hope I never come out of it!”
The next week, he had a meet in Monaco. The press billed it as Oliver’s attempt to smash the world record. But he had a terrible start — “ran the [worst] first couple of hurdles of the season,” as he would later write. Oliver won the race, but with a time of 13.01.
On his blog, Oliver did his best to celebrate; he titled his Monaco post, “I’m sitting on top of the world.” (And why not? The man had, after all, beaten the planet’s best hurdlers for the second-straight week, almost all of whom he’d see at the 2012 Olympics.) But the post grew defensive near the end. He reasoned that his best times should improve.
But they haven’t. That meet in Paris was the fastest he’s ever run.
Two recent, provocative studies hint at why. That Oliver has not broken Robles’s record has nothing to do with an unfortunate stumble out of the blocks or imperfect technique. It has everything to do with biology. In the sports that best measure athleticism — track and field, mostly — athletic performance has peaked. The studies show the steady progress of athletic achievement through the first half of the 20th century, and into the latter half, and always the world-record times fall. Then, suddenly, achievement flatlines. These days, athletes’ best sprints, best jumps, best throws — many of them happened years ago, sometimes a generation ago.
“We’re reaching our biological limits,” said Geoffroy Berthelot, one of the coauthors of both studies and a research specialist at the Institute for Biomedical Research and Sports Epidemiology in Paris. “We made major performance increases in the last century. And now it is very hard.”
Berthelot speaks with the bemused detachment of a French existentialist. What he predicts for the future of sport is just as indifferent, especially for the people who enjoy it: a great stagnation, reaching every event where singular athleticism is celebrated, for the rest of fans’ lives. And yet reading Berthelot’s work is not grim, not necessarily anyway. It is oddly absorbing. The implicit question that his work poses is larger than track and field, or swimming, or even sport itself. Do we dare to acknowledge our limitations? And what happens once we do?
It’s such a strange thought, antithetical to the more-more-more of American ideals. But it couldn’t be more relevant to Americans today.
In the early 1950s, the scientific community thought Roger Bannister’s attempt to break the four-minute mile might result in his death. Many scholars were certain of the limits of human achievement. If Bannister didn’t die, the thinking went, he might lose a limb. Or if no physiological barrier existed, surely a mathematical one did. The idea of one minute for one lap, and four minutes for four, imposed a beautiful, eerie symmetry — besting it seemed like an ugly distortion, and, hence, an impossibility. But Bannister broke the four-minute mark in 1954, and within three years 30 others had done it. Limitations, it seemed, existed only in the mind.
Except when they don’t. Geoffroy Berthelot began looking at track and field and swimming records in 2007. These were the sports that quantified the otherwise subjective idea of athleticism. There are no teammates in these sports, and improvement is marked scientifically, with a stopwatch or tape measure. In almost every other game, even stat-heavy games, athletic progression can’t be measured, because teammates and opponents temper results. What is achieved on these playing fields, then, doesn’t represent — can’t represent — the totality of achievement: Was Kareem Abdul-Jabbar a better basketball player than Michael Jordan because Abdul-Jabbar scored more career points? Or was Wilt Chamberlain better than them both because he scored 100 in a game? And where does this leave Bill Russell, who won more championships than anybody? By contrast, track and field and swimming are pure, the sporting world’s equivalent of a laboratory.
Berthelot wanted to know more about the progression of athletic feats over time in these sports, how and why performance improved in the modern Olympic era. So he plotted it out, every world record from 1896 onward. When placed on an L-shaped graph, the record times fell consistently, as if down a gently sloped hill. They fell because of improving nutritional standards, strength and conditioning programs, and the perfection of technique. But once Berthelot’s L-shaped graphs reached the 1980s, something strange happened: Those gently sloping hills leveled into plains. In event after event, record times began to hold.
The trend continued through the 1990s, and into the last decade. Today 64 percent of track and field world records have stood since 1993. One world record, the women’s 1,500 meters, hasn’t been broken since 1980. When Berthelot published his study last year in the online journal PLoS One, he made the simple but bold argument that athletic performance had peaked. On the whole, Berthelot said, the pinnacle of athletic achievement was achieved around 1988. We’ve been watching a virtual stasis ever since.
Berthelot argues that performance plateaued for the same reasons it improved over all those decades. Or, put another way, because it improved over all those decades. Records used to stand because some athletes were not well nourished. And then ubiquitous nutritional standards developed, and records fell. Records used to stand because athletes had idiosyncratic forms and techniques. And then through an evolution of experimentation — think high jumper Dick Fosbury and his Fosbury Flop — the best practices were codified and perfected, and now a conformity of form rules sport. Records used to stand because only a minority of athletes lifted weights and conditioned properly. Here, at least, the reasoning is a bit more complicated. Now everybody is ripped, yes, but what strength training also introduced was steroid use. Berthelot doesn’t name names, but he wonders how many of today’s records stand because of pharmacological help, the records broken during an era of primitive testing, before the World Anti-Doping Agency was established in 1999. (This assumes, of course, that WADA catches everything these days. And it probably doesn’t.)
Berthelot isn’t the only one arguing athletic limitation. Greg Whyte is a former English Olympic pentathlete, now a renowned trainer in the United Kingdom and academic at the University of Wolverhampton, who, in 2005, coauthored a study published in the journal Medicine and Science in Sports and Exercise. The study found that athletes in track and field’s distance events were nearing their physiological limits. When reached by phone recently and asked about the broader scope of Berthelot’s study, Whyte said, “I think Geoffroy’s right on it.” In fact, Whyte had just visited Berthelot in Paris. The two hope to collaborate in the future.
It’s a convincing case Berthelot presents, but for one unaccounted fact: What to do with Usain Bolt? The Jamaican keeps torching 100- and 200-meter times, is seemingly beyond-human in some of his races and at the very least the apotheosis of progression. How do you solve a problem like Usain Bolt?
“Bolt is a very particular case,” Berthelot said. Only five track and field world records have been broken since 2008. Bolt holds — or contributed to — three of them: the 100 meters, 200 meters, and 4×100-meter relay. “All the media focus on Usain Bolt because he’s the only one who’s progressing today,” Berthelot said. He may also be the last to progress.
Another Berthelot paper, published in 2008, predicts that the end of almost all athletic improvement will occur around 2027. By that year, if current trends hold — and for Berthelot, there’s little doubt that they will — the “human species’ physiological frontiers will be reached,” he writes. To the extent that world records are still vulnerable by then, they will be improved by no more than 0.05 percent — so marginal that the fans, Berthelot reasons, will likely fail to care.
Maybe the same can be said of the athletes. Berthelot notes how our culture asks them — and in fact elite athletes expect of themselves — to always grow bigger, be stronger, go faster. But what happens when that progression stops? Or, put another way: What happens if it stopped 20 years ago? Why go on? The fame is quadrennial. The money’s not great. (Not for nothing: Usain Bolt said recently he’ll go through another Olympic cycle and then switch to pro football.) The pressure, especially for those competing at a level where breaking a record is a possibility, is excruciating, said Dr. Alan Goldberg, a sports psychologist who has worked with Olympic athletes.
In a different sport but the same context, another individual performer, Ted Williams, looked back on his career and said, in the book “My Turn at Bat,” “I’m glad it’s over…I wouldn’t go back to being eighteen or nineteen years old knowing what was in store, the sourness and bitterness, knowing how I thought the weight of the damn world was always on my neck, grinding on me. I wouldn’t go back to that for anything.” Remember, this is from a man who succeeded, who, most important, broke records. What happens to the athlete who knows there are no records left to break? What happens when you acknowledge your limitations?
The short answer is, you create what you did not previously have. Swimming records, for instance, followed the same trend as track and field: a stasis beginning roughly in the mid-1980s. But in 2000, the sport innovated its way out of its torpor. The famous full-body LZR suits hit the scene, developed with NASA technologies and polyurethane, promising to reduce swimmers’ drag in the water. World records fell so quickly and so often that they became banal, the aquatic version of Barry Bonds hitting a home run. Since 2000, all but four of swimming’s records have been broken, many of them multiple times.
But in 2009 swimming’s governing body, FINA, banned the full-body LZR suits. FINA did not, however, ban the knee-to-navel suits men had previously worn, or the shoulder-to-knee suits women preferred. These suits were made of textiles or other woven materials. In other words, FINA acknowledged the need for technological enhancements, even as it banned the LZR suits. As a result, world records still fall. Last month in Dubai, American Ryan Lochte set one in the 400-meter individual medley.
These ancient sports are a lot like the world’s current leading economies: stagnant, and looking for a way to break through. The best in both worlds do so by innovating, improving the available resources, and when that process exhausts itself, creating new ones. However, this process — whether through an increasing reliance on computers, or NASA-designed swimsuits, or steroids that regulators can’t detect — changes the work we once loved, or the sports we once played, or the athletes we once cheered.
It may not always be for the worse, but one thing is certain. When we address our human limits these days, we actually become less human.
Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.
Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2011/01/23/peaked_performance/
Fran Lebowitz, New York Times
Full article and photo: http://www.nytimes.com/interactive/2011/01/16/opinion/16lebowitz_opart.html